• Table of Contents Selector View

    I wrote a new little view for a future version of Apollo that makes some changes to the default iOS version (that seems to be a weird trend in my recent programming, despite me loving built-in components). Here are some details about it! It’s also available as a library on GitHub if you’re interested!

    Are you familiar with UITableView’s sectionIndexTitles API? The little alphabet on the side of some tables for quickly jumping to sections? Here’s a tutorial if you’re unfamiliar.

    This is a view very similar to that (very little in the way of originality here, folks) but offers a few nice changes I was looking for, so I thought I’d open source it in case anyone else wanted it too.

    Benefits

    The UITableView API is great, and you should try to stick with built-in components when you can to avoid adding unnecessary dependencies. That being said, here are the advantages this brought me:

    • 🐇 Symbols support! SF Symbols are so pretty, and sometimes a section in your table doesn’t map nicely to a letter. Maybe you have some quick actions that you could represent with a lightning bolt or bunny!
    • 🌠 Optional overlay support. I really liked how on my old iPod nano, when you scrolled really quickly, a big overlay jumped up showing the current alphabetical section you were in so you could quickly see where you were. Well, added!
    • 🖐 Delayed gesture activation to reduce gesture conflict. For my app, an issue I had was that I had an optional swipe gesture that could occur from the right side of the screen. Whenever a user activated that gesture, it would also activate the section index titles and jump everywhere. This view requires the user long-press it to begin interacting. No conflicts!
    • 🏛 Not tied to sections. If you have a less straightforward data structure for your table, where maybe you want to be able to jump to multiple specific items within a section, this doesn’t require every index to be a section. Just respond to the delegate and you can do whatever you want.
    • 🏓 Not tied to tables. Heck, you don’t even have to use this with tables at all. If you want to overlay it in the middle of a UIImageView and each index screams a different Celine Dion song, go for it.
    • 🏂 Let’s be honest, a slightly better name. The Apple engineers created a beautiful API but I can never remember what it’s called to Google. sectionIndexTitles doesn’t roll off the tongue.
    • 🌝 Haha moon emoji

    How to Install

    No package managers here. Just drag and drop TableOfContentsSelector.swift into your Xcode project. You own this code now. You have to raise it as your own.

    How to Use

    Create your view.

    let tableOfContentsSelector = TableOfContentsSelector()
    

    (Optional: set a font. Supports increasing and decreasing font for accessibility purposes)

    tableOfContentsSelector.font = UIFont.systemFont(ofSize: 12.0, weight: .semibold) // Default
    

    The table of contents needs to know the height it’s working with in order to lay itself out properly, so let it know what it should be:

    tableOfContentsSelector.frame.size.height = view.bounds.height
    

    Set up your items. Items in the model are represented by the TableOfContentsItem enum, which has a letter case (.letter("A")) and a symbol case (.symbol(name: "symbol-sloth", isCustom: true)); the symbol can also be a custom SF Symbol that you created yourself and imported into your project. As a helper, there’s a variable called TableOfContentsSelector.alphanumericItems that supplies A-Z plus ### just as the UITableView API does.

    let tableOfContentsItems: [TableOfContentsItem] = [
        .symbol(name: "star", isCustom: false),
        .symbol(name: "house", isCustom: false),
        .symbol(name: "symbol-sloth", isCustom: true)
    ] + TableOfContentsSelector.alphanumericItems
    
    tableOfContentsSelector.updateWithItems(tableOfContentsItems)
    

    At this point, add it as a subview and position it however you see fit. You can use sizeThatFits to get the proper width as well.
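
    Here’s a rough sketch of that positioning, assuming you want it hugging the right edge (the frame math is just an example, not something the library requires):

    view.addSubview(tableOfContentsSelector)
    
    // sizeThatFits gives us the natural width; pin it to the right edge, full height
    let fittedWidth = tableOfContentsSelector.sizeThatFits(view.bounds.size).width
    tableOfContentsSelector.frame = CGRect(x: view.bounds.width - fittedWidth, y: 0.0, width: fittedWidth, height: view.bounds.height)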

    Lastly, implement the delegate methods so you can find out what’s going on.

    func viewToShowOverlayIn() -> UIView? {
        return self.view
    }
    
    func selectedItem(_ item: TableOfContentsItem) {
        // You probably want to do something with the selection! :D
    }
    
    func beganSelection() {}
    func endedSelection() {}
    

    That’s it! If you’re curious, internally it’s just a single UILabel with a big ol' attributed string. Hope you enjoy!

  • More Efficient/Faster Average Color of Image

    Skip to the ‘Juicy Code 🧃’ section if you just want the code and don’t care about the preamble of why you might want this!

    Finding the average color of an image is a nice trick to have in your toolbelt for spicing up views. For instance on iOS, Apple uses it to make their pretty home screen widgets, putting the average color of the image behind the text so the text is more readable. Here’s Apple’s News widget, and my Apollo widget, for instance:

    News and Apollo widgets on home screen

    Core Image Approach Pitfalls

    There are lots of articles out there on how to do this on iOS, but all of the code I’ve encountered accomplishes it with Core Image. Something like the following makes it really easy:

    func coreImageAverageColor() -> UIColor? {
        // Shrink down a bit first
        let aspectRatio = self.size.width / self.size.height
        let resizeSize = CGSize(width: 40.0, height: 40.0 / aspectRatio)
        let renderer = UIGraphicsImageRenderer(size: resizeSize)
        let baseImage = self
        
        let resizedImage = renderer.image { (context) in
            baseImage.draw(in: CGRect(origin: .zero, size: resizeSize))
        }
    
        // Core Image land!
        guard let inputImage = CIImage(image: resizedImage) else { return nil }
        let extentVector = CIVector(x: inputImage.extent.origin.x, y: inputImage.extent.origin.y, z: inputImage.extent.size.width, w: inputImage.extent.size.height)
    
        guard let filter = CIFilter(name: "CIAreaAverage", parameters: [kCIInputImageKey: inputImage, kCIInputExtentKey: extentVector]) else { return nil }
        guard let outputImage = filter.outputImage else { return nil }
    
        var bitmap = [UInt8](repeating: 0, count: 4)
        let context = CIContext(options: [.workingColorSpace: kCFNull as Any])
        context.render(outputImage, toBitmap: &bitmap, rowBytes: 4, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: .RGBA8, colorSpace: nil)
    
        return UIColor(red: CGFloat(bitmap[0]) / 255, green: CGFloat(bitmap[1]) / 255, blue: CGFloat(bitmap[2]) / 255, alpha: CGFloat(bitmap[3]) / 255)
    }
    

    Core Image is a great framework capable of some insanely powerful things, but in my experience it isn’t optimal for something as simple as finding the average color of an image: it takes quite a bit more memory and time, two things you don’t have a lot of when creating widgets. That, or I don’t know enough about Core Image (it’s a substantial framework!) to figure out how to optimize the above code, which is entirely possible, but the other solution is easier to understand, I think.

    You have around 30 MB of headroom with widgets, and from my tests the normal Core Image filter approach was taking about 5 MB of memory just for the calculation. That’s about 17% of the total memory you get for the entire widget spent on a single operation, which could really hurt if you’re up close to the limit. And you don’t want to break that 30 MB limit if you can avoid it: from what I can see, iOS (understandably) penalizes you for it, and repeated offenses mean your widget doesn’t get updated as often.

    I’m no Core Image expert, but I’m guessing since it’s this super powerful GPU-based framework the memory consumption seems inconsequential when you’re doing crazy realtime image filters or something. But who knows, I’m just going off measurements.

    You can see very clearly in Xcode’s memory debugger when Core Image kicks in, for instance, causing a little spike, and almost more concerning is that it doesn’t seem to normalize back down any time soon.

    Memory use before, spike

    (That might not be the most egregious example. It can be worse.)

    Just Iterating Over Pixels Approach

    An easy approach would be to just iterate over every pixel in the image, add up all their colors, then average them. The downside is that there could be a lot of pixels (think of a 4K image), but thankfully we can just resize the image down a bunch first (which is fast): the “gist” of the color information is preserved and we have far fewer pixels to deal with.

    One other catch is that just ‘iterating over the pixels’ isn’t as easy as it sounds when the image you’re dealing with could be in a variety of different formats (CMYK, RGBA, ARGB, BBQ, etc.). I came across a great answer on StackOverflow that linked to an Apple Technical Q&A recommending you just redraw the image anew in a standard format you can always trust, so that solves that.

    Lastly, there’s some debate over which algorithm is best for averaging out all the colors in an image. Here’s a very interesting blog post that talks about how a sum of squares approach could be considered better. Through a bunch of tests, I can see how it could be for approximating a bunch of color blocks of a larger image, but the ‘simpler’ way of just summing seems to have better color results for an entire image, and more closely mimics Core Image’s results. The code below includes both options, and I’ll include a comparison table so you can choose for yourself.
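
    As a toy illustration of the difference between the two (my own arithmetic, not from the linked post), here’s what each approach gives for one black pixel and one white pixel, single channel for simplicity:

    import UIKit
    
    let pixels: [CGFloat] = [0, 255]
    
    // Simple sum-then-divide: (0 + 255) / 2 = 127.5
    let simpleAverage = pixels.reduce(0, +) / CGFloat(pixels.count)
    
    // Sum of squares, then square root: sqrt((0² + 255²) / 2) ≈ 180.3
    let squaredAverage = sqrt(pixels.map { $0 * $0 }.reduce(0, +) / CGFloat(pixels.count))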

    The Juicy Code 🧃

    Here’s the code I landed on; feel free to change it as you see fit. I like to keep lots of comments in so that if I come back to it later I can understand what’s going on, especially since it deals with bitmasking and color format bit structures and whatnot, which I don’t use often in my day-to-day and which require a bit of rejogging of the Computer Science-y part of my brain. It’s really pretty simple once you read it over.

    extension UIImage {
        /// There are two main ways to get the color from an image, just a simple "sum up an average" or by squaring their sums. Each has their advantages, but the 'simple' option *seems* better for average color of entire image and closely mirrors CoreImage. Details: https://sighack.com/post/averaging-rgb-colors-the-right-way
        enum AverageColorAlgorithm {
            case simple
            case squareRoot
        }
        
        func findAverageColor(algorithm: AverageColorAlgorithm = .simple) -> UIColor? {
            guard let cgImage = cgImage else { return nil }
            
            // First, resize the image. We do this for two reasons, 1) less pixels to deal with means faster calculation and a resized image still has the "gist" of the colors, and 2) the image we're dealing with may come in any of a variety of color formats (CMYK, ARGB, RGBA, etc.) which complicates things, and redrawing it normalizes that into a base color format we can deal with.
            // 40x40 is a good size to resize to still preserve quite a bit of detail but not have too many pixels to deal with. Aspect ratio is irrelevant for just finding average color.
            let size = CGSize(width: 40, height: 40)
            
            let width = Int(size.width)
            let height = Int(size.height)
            let totalPixels = width * height
            
            let colorSpace = CGColorSpaceCreateDeviceRGB()
            
            // ARGB format
            let bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue
            
            // 8 bits for each color channel, we're doing ARGB so 32 bits (4 bytes) total, and thus if the image is n pixels wide, and has 4 bytes per pixel, the total bytes per row is 4n. That gives us 2^8 = 256 color variations for each RGB channel or 256 * 256 * 256 = ~16.7M color options in total. That seems like a lot, but lots of HDR movies are in 10 bit, which is (2^10)^3 = 1 billion color options!
            guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width * 4, space: colorSpace, bitmapInfo: bitmapInfo) else { return nil }
    
            // Draw our resized image
            context.draw(cgImage, in: CGRect(origin: .zero, size: size))
    
            guard let pixelBuffer = context.data else { return nil }
            
            // Bind the pixel buffer's memory location to a pointer we can use/access
            let pointer = pixelBuffer.bindMemory(to: UInt32.self, capacity: width * height)
    
            // Keep track of total colors (note: we don't care about alpha and will always assume alpha of 1, AKA opaque)
            var totalRed = 0
            var totalBlue = 0
            var totalGreen = 0
            
            // Column of pixels in image
            for x in 0 ..< width {
                // Row of pixels in image
                for y in 0 ..< height {
                    // To get the pixel location just think of the image as a grid of pixels, but stored as one long row rather than columns and rows, so for instance to map the pixel from the grid in the 15th row and 3 columns in to our "long row", we'd offset ourselves 15 times the width in pixels of the image, and then offset by the amount of columns
                    let pixel = pointer[(y * width) + x]
                    
                    let r = red(for: pixel)
                    let g = green(for: pixel)
                    let b = blue(for: pixel)
    
                    switch algorithm {
                    case .simple:
                        totalRed += Int(r)
                        totalBlue += Int(b)
                        totalGreen += Int(g)
                    case .squareRoot:
                        totalRed += Int(pow(CGFloat(r), CGFloat(2)))
                        totalGreen += Int(pow(CGFloat(g), CGFloat(2)))
                        totalBlue += Int(pow(CGFloat(b), CGFloat(2)))
                    }
                }
            }
            
            let averageRed: CGFloat
            let averageGreen: CGFloat
            let averageBlue: CGFloat
            
            switch algorithm {
            case .simple:
                averageRed = CGFloat(totalRed) / CGFloat(totalPixels)
                averageGreen = CGFloat(totalGreen) / CGFloat(totalPixels)
                averageBlue = CGFloat(totalBlue) / CGFloat(totalPixels)
            case .squareRoot:
                averageRed = sqrt(CGFloat(totalRed) / CGFloat(totalPixels))
                averageGreen = sqrt(CGFloat(totalGreen) / CGFloat(totalPixels))
                averageBlue = sqrt(CGFloat(totalBlue) / CGFloat(totalPixels))
            }
            
            // Convert from [0 ... 255] format to the [0 ... 1.0] format UIColor wants
            return UIColor(red: averageRed / 255.0, green: averageGreen / 255.0, blue: averageBlue / 255.0, alpha: 1.0)
        }
        
        private func red(for pixelData: UInt32) -> UInt8 {
            // For a quick primer on bit shifting and what we're doing here: in our ARGB color format each pixel's colors are stored as a 32-bit integer, with 8 bits per color channel (A, R, G, and B).
            //
            // So a pure red color would look like this in bits in our format: all red, no green, no blue, and 'who cares' alpha:
            //
            // 11111111 11111111 00000000 00000000
            //  ^alpha   ^red     ^green   ^blue
            //
            // We want to grab only the red channel in this case; we don't care about alpha, green, or blue. So we want to shift the red bits all the way to the right so they end up in the rightmost position (we're storing each color as 8 bits, so we need the rightmost 8 bits to be the red). Red starts 16 bits from the right, so we shift it by 16 (for the other colors, we shift less, as shown below).
            //
            // Just shifting would give us:
            //
            // 00000000 00000000 11111111 11111111
            //                    ^alpha   ^red
            //
            // The alpha got pulled over too, which we don't want or care about, so we need to get rid of it. We can do that with the bitwise AND operator (&), which compares bits and only keeps a 1 if both bits being compared are 1s. So we're basically using it as a gate to only let the bits we want through. 255 (below) is the value we're using, as in binary it's 11111111 (or in 32 bits, 00000000 00000000 00000000 11111111), and the result of the bitwise operation is then:
            //
            // 00000000 00000000 11111111 11111111
            // 00000000 00000000 00000000 11111111
            // -----------------------------------
            // 00000000 00000000 00000000 11111111
            //
            // So as you can see, it only keeps the last 8 bits and 0s out the rest, which is what we want! Woohoo! (It isn't too exciting in this scenario, but if it wasn't pure red and was instead a red of value "11010010" for instance, it would also mirror that down)
            return UInt8((pixelData >> 16) & 255)
        }
    
        private func green(for pixelData: UInt32) -> UInt8 {
            return UInt8((pixelData >> 8) & 255)
        }
    
        private func blue(for pixelData: UInt32) -> UInt8 {
            return UInt8((pixelData >> 0) & 255)
        }
    }
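
    Quick usage sketch (the asset name here is just a placeholder):

    let image = UIImage(named: "flowers")! // placeholder asset name
    
    // Compare the two averaging options from the extension above
    let simpleColor = image.findAverageColor(algorithm: .simple)
    let squaredColor = image.findAverageColor(algorithm: .squareRoot)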
    

    The Results

    Memory use after, no spike

    As you can see, we don’t see any memory spike whatsoever from the call. Yay! If anything, it kinda dips a bit. Did we find the secret to infinite memory?

    In terms of speed, it’s also about 4x faster. The Core Image approach takes about 0.41 seconds on a variety of test images, whereas the ‘Just Iterating Over Pixels’ approach (I need a catchier name) only takes 0.09 seconds.
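
    (If you want to reproduce the timing yourself, here’s a rough sketch of how I’d measure it, where testImage is whatever image you’re benchmarking:)

    // Simple wall-clock timing around a single call
    let start = CFAbsoluteTimeGetCurrent()
    _ = testImage.findAverageColor(algorithm: .simple)
    print("findAverageColor took \(CFAbsoluteTimeGetCurrent() - start) seconds")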

    These tests were done on an iPhone 6s, which I like as a test device because it’s the oldest iPhone that still supports iOS 13/14.

    Comparison of Colors

    Lastly, here’s a quick comparison chart showing the differences between the ‘simple’ summing algorithm, the ‘sum of squares’ algorithm, and the Core Image filter. As you can see, especially for the second flowery image, the ‘simple/sum’ approach seems to have the most desirable results and closely mirrors Core Image.

    Comparison of average colors from Simple, Squared, and Core Image

    Okay, that’s all I got! Have fun with colors!

  • Trials and Tribulations of Making an Interruptable Custom View Controller Transition on iOS

    I think it’s safe to say that while the iOS custom view controller transition API is very powerful, with that power comes a great deal of complexity. It can be tricky, and I’m having one of those days where it’s getting the better of me and I just cannot get it to do what I want it to do, even though what I want seems pretty straightforward. Interruptible/cancellable custom view controller transitions.

    What I Want

    I built a little library called ChidoriMenu that effectively just reimplements iOS 14’s Pull Down Menus as a custom view controller for added flexibility.

    As it always goes, 99% of it went smoothly as could be, but then I was playing around in the Simulator with Apple’s version, and noticed with Apple’s you could tap outside the menu while it was being presented to cancel the presentation and it would smoothly retract. With mine, you have to wait for the animation to finish before dismissing. 0.4 seconds can be a long time. I NEED IT. The fluidity/cancellability of iOS' animations is one of the most fun parts of the operating system, and a big reason the iPhone X’s swipe up to go home feels so nice.

    Here is Apple’s with Toggle Slow Animations enabled to better illustrate how you can interrupt/cancel it.

    How I Implemented My Menu

    Mine’s pretty simple: just a custom, non-interactive view controller presentation using an animation controller and a UIPresentationController subclass. You just tap to summon the menu, and tap away to close it, so nothing is really interactive, and virtually every tutorial on the web about interactive view controller transitions has “the interaction” driven by something like a UIPanGestureRecognizer, so an interactive transition didn’t really seem needed in this case. So it’s just an animation controller that animates it on and off screen.
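
    For context, the wiring for that is the standard custom-presentation dance (the menu sets modalPresentationStyle = .custom and transitioningDelegate = self somewhere like its initializer). Here’s a hedged sketch; the ChidoriPresentationController/ChidoriAnimationController names are assumptions based on how I’ve described the setup rather than a verbatim excerpt:

    extension ChidoriMenu: UIViewControllerTransitioningDelegate {
        func presentationController(forPresented presented: UIViewController, presenting: UIViewController?, source: UIViewController) -> UIPresentationController? {
            // Assumed subclass that adds the dark overlay and tap-to-dismiss behavior
            return ChidoriPresentationController(presentedViewController: presented, presenting: presenting)
        }
        
        func animationController(forPresented presented: UIViewController, presenting: UIViewController, source: UIViewController) -> UIViewControllerAnimatedTransitioning? {
            // Assumed animation controller, shown in the snippets below
            return ChidoriAnimationController(type: .presentation)
        }
        
        func animationController(forDismissed dismissed: UIViewController) -> UIViewControllerAnimatedTransitioning? {
            return ChidoriAnimationController(type: .dismissal)
        }
    }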

    Catch #1

    Well, how do I make this interruptable? Say I manually set the animation duration to 10 seconds, and then programmatically dismiss it 2 seconds after it starts as a test.

    let tappedPoint = tapGestureRecognizer.location(in: view)
            
    let chidoriMenu = ChidoriMenu(menu: existingMenu, summonPoint: tappedPoint)
    present(chidoriMenu, animated: true, completion: nil)
    
    DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(2)) {
        chidoriMenu.dismiss(animated: true, completion: nil)
    }
    

    No dice. It queues up the dismissal and it occurs at the 10 second mark, right after the animation concludes. Not exactly interrupting anything.

    Okay, let’s see. Bruce Nilo and Michael Turner of the UIKit team did a great talk at WWDC 2016 about view controller transitions and making them interruptible.

    The animation is powered by UIViewPropertyAnimator, and they mention in iOS 10 they added a method called interruptibleAnimator(using:context:), wherein you return your animator as a means for the transition to be interruptible. They even state the following at the 25:40 point:

    If you do not implement the interaction controller, meaning you only implement a custom animation controller, then you need to implement animateTransition. And you would do so very simply, like this method. You take the interruptible animator that you would return and you would basically tell it to start.

    Which sounds great, as mine is just a normal, non-interactive animation controller. Let’s do that!

    var animatorForCurrentSession: UIViewPropertyAnimator?
        
    func interruptibleAnimator(using transitionContext: UIViewControllerContextTransitioning) -> UIViewImplicitlyAnimating {
        // Required to use the same animator for life of transition, so don't create multiple times
        if let animatorForCurrentSession = animatorForCurrentSession {
            return animatorForCurrentSession
        }
        
        let propertyAnimator = UIViewPropertyAnimator(duration: transitionDuration(using: transitionContext), dampingRatio: 0.75)
        propertyAnimator.isInterruptible = true
        propertyAnimator.isUserInteractionEnabled = true
    
        // ... animation set up goes here ...
        
        // Animate! 🪄
        propertyAnimator.addAnimations {
            chidoriMenu.view.transform = finalTransform
            chidoriMenu.view.alpha = finalAlpha
        }
        
        propertyAnimator.addCompletion { (position) in
            guard position == .end else { return }
            transitionContext.completeTransition(!transitionContext.transitionWasCancelled)
            self.animatorForCurrentSession = nil
        }
        
        self.animatorForCurrentSession = propertyAnimator
        return propertyAnimator
    }
    
    func animateTransition(using transitionContext: UIViewControllerContextTransitioning) {
        let interruptableAnimator = interruptibleAnimator(using: transitionContext)
        
        if type == .presentation {
            if let chidoriMenu: ChidoriMenu = transitionContext.viewController(forKey: UITransitionContextViewControllerKey.to) as? ChidoriMenu {
                transitionContext.containerView.addSubview(chidoriMenu.view)
            }
        }
        
        interruptableAnimator.startAnimation()
    }
    

    However, it still doesn’t interrupt it at the 2 second point, still opting to wait until the 10 second point that the animation completes. It calls the method, but it’s still not interruptible. I tried intercepting the dismiss call and calling .isReversed = true manually on the property animator, but it still waits 10 seconds before the completion handler is called.

    After that above quote, they then state “However, we kind of advise that you use an interaction controller if you’re going to make it interruptible.” so I’m going to keep that in mind.

    Catch #2

    Even if the above did work, it has to be powered by a user tapping outside the menu to close it. This is accomplished in my UIPresentationController subclass by adding a tap gesture recognizer to a background view, which then calls dismiss upon being tapped.

    override func presentationWillBegin() {
        super.presentationWillBegin()
    
        darkOverlayView.backgroundColor = UIColor(white: 0.0, alpha: 0.2)
        presentingViewController.view.tintAdjustmentMode = .dimmed
        containerView.addSubview(darkOverlayView)
    
        tapGestureRecognizer.addTarget(self, action: #selector(tappedDarkOverlayView(tapGestureRecognizer:)))
        darkOverlayView.addGestureRecognizer(tapGestureRecognizer)
    }
    
    @objc private func tappedDarkOverlayView(tapGestureRecognizer: UITapGestureRecognizer) {
        presentedViewController.dismiss(animated: true, completion: nil)
    }
    

    Problem is, all taps also refuse to be registered until the animation completes. And it’s not an issue with the UITapGestureRecognizer: adding a simple UIButton results in the same behavior, where it only becomes tappable once the animation ends.

    (Note: when switching to an interactive transition below, UIPresentationController becomes freed up and accepts these touches.)

    All Signs Point to Interactive

    Between the advice of the UIKit engineers in the WWDC video and the fact that it doesn’t seem interactable during the presentation, let’s just bite the bullet and make it an interactive transition. Plus, the WWDC 2013 video on Custom Transitions Using View Controllers states (paraphrasing) “Interactive transitions don’t need to be powered by gestures only, anything iterable works”.

    My issue here is: what is iterating? It’s just a “fire and forget” animation from the tap of a button. Essentially the API works by incrementing a “progress” value throughout the animation so the custom transition is aware of where you are in the transition. For instance, if you’re swiping back to dismiss, it would be a measurement from 0.0 to 1.0 of how close to the left side of the screen you are. There are many examples online, Apple’s included, showing how to implement interactive view controller transitions powered by a UIPanGestureRecognizer, but I’m really having trouble wrapping my head around what is iterating or driving the progress updates here.

    The only thing I could really think of was CADisplayLink (which is basically just an NSTimer synchronized with the refresh rate of the screen — 60 times per second typically) that just tracks how long it’s been since the animation started. If it’s a 10 second animation, and 5 seconds have passed, you’re 50% done! Here’s an implementation, after I changed my animation controller to be a subclass of UIPercentDrivenInteractiveTransition rather than NSObject:

    var displayLink: CADisplayLink?
    var transitionContext: UIViewControllerContextTransitioning?
    var presentationAnimationTimeStart: CFTimeInterval?
    
    override func startInteractiveTransition(_ transitionContext: UIViewControllerContextTransitioning) {
        // ...
    
        self.transitionContext = transitionContext
        self.presentationAnimationTimeStart = CACurrentMediaTime()
    
        let displayLink = CADisplayLink(target: self, selector: #selector(displayLinkUpdate(displayLink:)))
        self.displayLink = displayLink
        displayLink.add(to: .current, forMode: .common)
    }
    
    @objc private func displayLinkUpdate(displayLink: CADisplayLink) {
        // The start time is optional and only set once the transition begins
        guard let presentationAnimationTimeStart = presentationAnimationTimeStart else { return }
        
        let timeSinceAnimationBegan = displayLink.timestamp - presentationAnimationTimeStart
        let progress = CGFloat(timeSinceAnimationBegan / transitionDuration(using: transitionContext))
        self.update(progress) // <-- secret sauce
    }
    

    Again, this seems kinda counter intuitive to me. In our case time powers the animation, and we’re trying to shoehorn it into an interactive progress API by measuring time itself. But hey, if it works, it works.

    But alas, it doesn’t.

    Catch #3

    The issue now is that, once the animation starts, it no longer obeys our custom timing curve. Mimicking Apple’s, we want our view controller to present with a subtle little bounce, rather than a boring, linear animation. But using CADisplayLink to power it results in the animation being shown linearly, despite the property animator we returned from interruptibleAnimator(using:) looking like this: UIViewPropertyAnimator(duration: transitionDuration(using: transitionContext), dampingRatio: 0.75). See that damping? That’s springy! I even tried really spelling it out to the UIPercentDrivenInteractiveTransition with self.timingCurve = propertyAnimator.timingParameters. Still no luck.

    But wait, that’s really weird. I use interactive view controller transitions in Apollo to power the custom navigation controller animations, and I distinctly remember it annoyingly following the animation curve during the interactive transition. I specifically had to program around this, because when you’re actually interactive, say following a user’s finger, you need it to be linear so that it follows the finger predictably.

    Okay, so I check out Apollo’s code. Ah ha, I wrote it a few years back, so it uses the older school UIView.animate… rather than UIViewPropertyAnimator. Surely that can’t be it.

    … It was it.

    UIView.animate(withDuration: transitionDuration(using: transitionContext), delay: 0.0, usingSpringWithDamping: 0.75, initialSpringVelocity: 0, options: [.allowUserInteraction, .beginFromCurrentState]) {
        chidoriMenu.view.transform = finalTransform
        chidoriMenu.view.alpha = finalAlpha
    } completion: { (didComplete) in
        if (isPresenting && transitionContext.transitionWasCancelled) || (!isPresenting && !transitionContext.transitionWasCancelled) {
            presentingViewController.view.tintAdjustmentMode = .automatic
        }
        
        transitionContext.completeTransition(!transitionContext.transitionWasCancelled)
    }
    

    It works if I use the old-school UIView.animate APIs in startInteractiveTransition and remove the interruptibleAnimator method, and CADisplayLink perfectly follows the animation curve. Okay, what gives? Implementing interruptibleAnimator was supposed to bridge this gap. There’s even a question on StackOverflow about it, but I suppose that question doesn’t say anything about animation curves. So, bug maybe?
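
    For reference, pieced together from the snippets above (and very much a rough sketch, the real thing lives in the gists mentioned below), the shape of what ended up working looks something like this:

    // Rough shape only: a UIPercentDrivenInteractiveTransition subclass whose
    // startInteractiveTransition runs the old-school spring animation while the
    // CADisplayLink from earlier drives update(_:)
    override func startInteractiveTransition(_ transitionContext: UIViewControllerContextTransitioning) {
        self.transitionContext = transitionContext
        self.presentationAnimationTimeStart = CACurrentMediaTime()
        
        if type == .presentation, let chidoriMenu = transitionContext.viewController(forKey: .to) as? ChidoriMenu {
            transitionContext.containerView.addSubview(chidoriMenu.view)
        }
        
        UIView.animate(withDuration: transitionDuration(using: transitionContext), delay: 0.0, usingSpringWithDamping: 0.75, initialSpringVelocity: 0, options: [.allowUserInteraction, .beginFromCurrentState]) {
            // ... apply finalTransform/finalAlpha, as in the earlier snippet ...
        } completion: { (didComplete) in
            transitionContext.completeTransition(!transitionContext.transitionWasCancelled)
        }
        
        let displayLink = CADisplayLink(target: self, selector: #selector(displayLinkUpdate(displayLink:)))
        self.displayLink = displayLink
        displayLink.add(to: .current, forMode: .common)
    }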

    End Result

    So I guess that kinda works? But this all feels so hacky. I don’t like CADisplayLink much here: it seems to have a few jitters when dismissing compared to the first solution (only on device, not in the Simulator), and it would be nice to know how to use it with the newer UIViewPropertyAnimator APIs. I get a general “fragile” feeling with my code here that I don’t really want to ship, so I reverted back to the initial, non-interactive solution. (An additional minor thing that might not even be possible: Apple’s menu also lets you summon another one while the existing one is dismissing, which my code doesn’t do and I didn’t even realize was possible.) And worst of all, you ask? CADisplayLink means “Toggle Slow Animations” in the Simulator doesn’t work for the animation anymore!

    (Maybe I just need to rebuild Apollo in SwiftUI.)

    Here are some gists showing the two final “solutions”:

    A Call for Help

    If you know your way around the custom view controller transition APIs and have any insight, you’d be my favorite person on the planet. Making animations more interruptible would be a fun skill to learn, I’m just at wit’s end with trying to implement it. I’ve linked the gists in the previous paragraph, and ChidoriMenu in its entirety with the non-interactive implementation is also on GitHub.

    I’m curious if there’s a way to implement it without requiring an interactive transition, but if not, it’d be neat to know if it actually does require CADisplayLink, and if it does, it’d be neat to know what I’m still doing wrong in the above code, haha.

    DMs are open on my Twitter, feel free to reach out (alternatively my email is me@ my domain name).

  • Logging information from iOS Widgets

    Lately users have been emailing me about a few odd things happening with their Apollo iOS 14 home screen widgets, and some well-placed logs can really help with identifying what’s going wrong. iOS has a sophisticated built-in logging mechanism in os_log (and now the Logger API in iOS 14), but unfortunately it doesn’t provide an easy way for users to get the logs to you, so it’s not optimal in this case.

    Normally I use CocoaLumberjack for this in Apollo because logging can be pretty complex and I like to use a battle-tested solution, but for whatever reason I cannot get it working in my Widget Extension. I’ve tried setting it up to log to the shared app group container as well as disabling async logging, to no avail.

    However this little logging use case in widgets is simple enough that I figure I’ll just whip up a simple little logger (per the suggestion of Brian Mueller), and I thought I’d include it here in case anyone else would benefit from it.

    The main gist of it is that it writes to the shared app group container (make sure you have App Groups set up) so both the Widget Extension and the main app can access it. It uses just a single file (created if it doesn’t exist), and once the file gets too large (which I defined as 2 MB) it trims the older half of the logs so the log file doesn’t bloat unnecessarily (it does this by finding a newline near the halfway point of the Data, rather than reading the entire String into memory). It also automatically captures the line, file, and function the log call occurs in. Per Florian Bürger, be careful with using DateFormatter willy-nilly; I’m not encountering any performance issues, but if you do, consider caching the DateFormatter instance for reuse.

    class WidgetLogger {
        static let fileURL: URL = {
            /// Write to shared app group container so both the widget and the host app can access
            return FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: "group.com.christianselig.apollo")!.appendingPathComponent("widget.log")
        }()
        
        static func log(_ message: String, file: String = #file, function: String = #function, line: Int = #line) {
            let dateFormatter: DateFormatter = DateFormatter()
            dateFormatter.dateFormat = "MMM d, HH:mm:ss.SSS"
            let dateString = dateFormatter.string(from: Date())
            
            let timestampedMessage = "\(dateString) [\((file as NSString).lastPathComponent)/\(function)/\(line)]: \(message)\n"
            
            guard let messageData = timestampedMessage.data(using: .utf8) else {
                print("Could not encode String to Data.")
                return
            }
            
            if FileManager.default.fileExists(atPath: fileURL.path) {
                do {
                    let fileAttributes = try FileManager.default.attributesOfItem(atPath: fileURL.path)
                    let twoMegabytes = 2 * 1_024 * 1_024
    
                    // In order to avoid having a log file that is enormous, trim out the oldest entries if the file size is larger than 2 MB
                    // (Checking file size is more performant than counting total lines each time)
                    if let size = fileAttributes[.size] as? Int, size > twoMegabytes {
                        // Find the first newline after the halfway point in the file, and only keep everything past that point to trim the file
                        let logsData = try Data(contentsOf: fileURL, options: .mappedIfSafe)
                        let newlineData = "\n".data(using: .utf8)!
                        let dataSize = logsData.count
                        let halfwayPoint = Int(CGFloat(dataSize) / CGFloat(2.0))
                        
                        guard let range = logsData.range(of: newlineData, options: [], in: halfwayPoint ..< dataSize) else {
                            assertionFailure("A newline should have been found")
                            return
                        }
                        
                        let remainingLogs = logsData.subdata(in: range.endIndex ..< dataSize)
                        try remainingLogs.write(to: fileURL, options: .atomicWrite)
                    }
                    
                    let fileHandle = try FileHandle(forWritingTo: fileURL)
                    fileHandle.seekToEndOfFile()
                    fileHandle.write(messageData)
                    fileHandle.closeFile()
                } catch {
                    print("Error trying to write to end of file: \(error)")
                }
            } else {
                do {
                    try timestampedMessage.write(to: fileURL, atomically: true, encoding: .utf8)
                } catch {
                    print("Error creating file to log to: \(error)")
                }
            }
        }
    }
    

    Usage:

    WidgetLogger.log("Called getTimeline at \(Date())")
    

    You can then add a way for the user to email this file to you from within your app; I have a little “Logs” button they can tap to shoot it over as part of troubleshooting. The code for attaching it to MFMailComposeViewController (which might not be the best choice given the iOS 14 feature of setting alternate email clients as the default, since that API doesn’t work with those yet) is:

    if let data = try? Data(contentsOf: WidgetLogger.fileURL) {
        let mailViewController = MFMailComposeViewController()
        mailViewController.addAttachmentData(data, mimeType: "text/plain", fileName: WidgetLogger.fileURL.lastPathComponent)
    }
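
    (For completeness, here’s the surrounding boilerplate I’d pair with that snippet; the recipient address is a placeholder, and the delegate/present calls are the usual MFMailComposeViewController dance:)

    // Check mail is configured before presenting; recipient is a placeholder
    guard MFMailComposeViewController.canSendMail(),
          let data = try? Data(contentsOf: WidgetLogger.fileURL) else { return }
    
    let mailViewController = MFMailComposeViewController()
    mailViewController.mailComposeDelegate = self
    mailViewController.setToRecipients(["support@example.com"])
    mailViewController.setSubject("Widget Logs")
    mailViewController.addAttachmentData(data, mimeType: "text/plain", fileName: WidgetLogger.fileURL.lastPathComponent)
    present(mailViewController, animated: true, completion: nil)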
    

    Or add it as a file to a UIActivityViewController that they can share:

    let activityViewController = UIActivityViewController(activityItems: [WidgetLogger.fileURL], applicationActivities: nil)
    

    Or just make it into a String and do whatever you want with it!

    let logs = try? String(contentsOf: WidgetLogger.fileURL, encoding: .utf8)
    

    Anyway, that’s it! Happy logging!

  • Using PHPickerViewController Images in a Memory-Efficient Way

    PHPickerViewController is (in my opinion) one of the more exciting parts of iOS 14. We developers now have a fully-fledged photo picker that we can just use, rather than having to spend a bunch of our time creating our own (much like what SFSafariViewController did for developers writing in-app web browsers). Similar to SFSafariViewController, it also has terrific privacy benefits: previously, in order to show the pictures to choose from in our custom UIs, we had to request access to all the user’s photos, which isn’t something users or developers really wanted to contend with. PHPickerViewController works differently in that iOS throws up the picker in a separate process, and the host app only sees the pictures the user gave the app access to, and not a single one more. Much nicer!

    (Note we did/still do have UIImagePickerController, but many of us didn’t use it due to missing functionality, like selecting multiple photos, which PHPickerViewController handles brilliantly.)

    Apollo uses this API in iOS 14 to power its image uploader, so you can upload images directly into your comments or posts.

    How to Use

    The API is really nice and simple to integrate, too. The only hitch I ran into is that the callback when the user selects their photos essentially provides you with a bunch of objects that wrap NSItemProvider objects, which seemed a little intimidating at first glance versus something “simpler” like a bunch of UIImage objects (but there’s good reason they don’t do the latter).

    Presenting the picker in the first place is easy:

    var configuration = PHPickerConfiguration()
    configuration.selectionLimit = 10
    configuration.filter = .images
    configuration.preferredAssetRepresentationMode = .current // Don't bother modifying how they're represented since we're just turning them into Data anyway
    
    let picker = PHPickerViewController(configuration: configuration)
    picker.delegate = self
    present(picker, animated: true, completion: nil)
    

    But acting on the user’s selections is where you can have some trouble:

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        /// What do I do here?! 👉🥺👈
    }
    

    In reality though, it’s not too hard.

    What Not to Do

    My first swing at bat was… not great. If the user selected a bunch of photos to upload and the images were decently sized (say, straight off a modern iPhone camera), the memory footprint of the app could temporarily swell to multiple gigabytes. Yeah, with a g. It caused some crashing and user confusion, understandably, and was quite silly of me.

    At first my naive solution was something along the lines of (simplified):

    var images: [UIImage] = []
    
    for result in results {
        result.itemProvider.loadObject(ofClass: UIImage.self) { (object, error) in
            guard let image = object as? UIImage else { return }
    
            // Decoding the full-size UIImage and re-drawing it just to shrink it is the expensive part
            let newSize = CGSize(width: 2_000, height: 2_000)
            let resizedImage = UIGraphicsImageRenderer(size: newSize).image { (context) in
                image.draw(in: CGRect(origin: .zero, size: newSize))
            }
    
            images.append(resizedImage)
        }
    }
    

    Long story short, decoding the potentially large images into full-fledged UIImage objects, and especially then going and re-drawing them to resize them, is a very memory-expensive operation, multiplied with each image. Bad. Don’t do this. I know better. You know better.

    (If you’re curious for more information, Jordan Morgan has a great overview with his try! Swift NYC talk on The Life of an Image and there’s also an excellent WWDC session from 2018 called Image and Graphics Best Practices that goes even more in depth.)

    What You Should Do

    It’s a tiny bit longer because we have to dip down into Core Graphics, but don’t fret, it’s really not that bad. I’ll break it down.

    let dispatchQueue = DispatchQueue(label: "com.christianselig.Apollo.AlbumImageQueue")
    var selectedImageDatas = [Data?](repeating: nil, count: results.count) // Awkwardly named, sure
    var totalConversionsCompleted = 0
    
    for (index, result) in results.enumerated() {
        result.itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { (url, error) in
            guard let url = url else {
                dispatchQueue.sync { totalConversionsCompleted += 1 }
                return
            }
            
            let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
            
            guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
                dispatchQueue.sync { totalConversionsCompleted += 1 }
                return
            }
            
            let downsampleOptions = [
                kCGImageSourceCreateThumbnailFromImageAlways: true,
                kCGImageSourceCreateThumbnailWithTransform: true,
                kCGImageSourceThumbnailMaxPixelSize: 2_000,
            ] as CFDictionary
    
            guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
                dispatchQueue.sync { totalConversionsCompleted += 1 }
                return
            }
    
            let data = NSMutableData()
            
            guard let imageDestination = CGImageDestinationCreateWithData(data, kUTTypeJPEG, 1, nil) else {
                dispatchQueue.sync { totalConversionsCompleted += 1 }
                return
            }
            
            // Don't compress PNGs, they're too pretty
            let isPNG: Bool = {
                guard let utType = cgImage.utType else { return false }
                return (utType as String) == UTType.png.identifier
            }()
    
            let destinationProperties = [
                kCGImageDestinationLossyCompressionQuality: isPNG ? 1.0 : 0.75
            ] as CFDictionary
    
            CGImageDestinationAddImage(imageDestination, cgImage, destinationProperties)
            CGImageDestinationFinalize(imageDestination)
            
            dispatchQueue.sync {
                selectedImageDatas[index] = data as Data
                totalConversionsCompleted += 1
            }
        }
    }
    

    Break it Down Now

    There’s a bit to unpack here, but I’ll try to hit everything.

    The core concept is that we’re no longer loading the full UIImage and/or drawing it into a context each time (which can be monstrously large, and is why PHPicker doesn’t just hand us UIImage objects). In my case I’m just uploading the Data and getting a resulting URL back, so I don’t ever need a UIImage at all, but if you do, creating a UIImage from the smaller CGImage will still be much better.

    Okay! So we start off with a queue and the data to be collected. loadFileRepresentation fires its callback on an async queue, and the docs don’t mention whether it executes serially (in practice it does, but that could change), so create a queue to ensure you’re not writing to this array of Data across multiple threads. Also note that the array itself is set up so we can maintain the order of the images, otherwise the order the user selected the photos in and the order they’re processed in may not line up 1:1. Lastly, we keep a separate counter to know when we’re done (there’s a sketch of how I’d use it after the snippet below).

    let dispatchQueue = DispatchQueue(label: "com.christianselig.Apollo.AlbumImageQueue")
    var selectedImageDatas = [Data?](repeating: nil, count: results.count) // Awkwardly named, sure
    var totalConversionsCompleted = 0
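
    Jumping ahead slightly, here’s a sketch of how that counter pays off at the bottom of the loop (finishedProcessing is a hypothetical helper, not something from the snippet above):

    dispatchQueue.sync {
        selectedImageDatas[index] = data as Data
        totalConversionsCompleted += 1
        
        // Once every result has been handled (successfully or not), hand the data off
        if totalConversionsCompleted == results.count {
            let imageDatas = selectedImageDatas.compactMap { $0 }
            
            DispatchQueue.main.async {
                self.finishedProcessing(imageDatas: imageDatas) // hypothetical helper
            }
        }
    }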
    

    Moving onto the main loop, instead of asking NSItemProvider to serve us up a potentially enormous UIImage, we approach more cautiously by requesting a URL to the image in the tmp directory. More freedom.

    result.itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { (url, error) in
    

    We then go onto create a CGImage but with certain requirements around the image size so as to not create something larger than we need. These Core Graphics functions can seem a little intimidating, but between their names and the corresponding docs they paint a clear picture as to what they’re doing.

    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
        dispatchQueue.sync { totalConversionsCompleted += 1 }
        return
    }
    
    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: 2_000,
    ] as CFDictionary
    
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
        dispatchQueue.sync { totalConversionsCompleted += 1 }
        return
    }
    

    Lastly, we convert this into Data with a bit of compression (only if it’s not a PNG though, PNGs are typically screenshots and whatnot, and I personally don’t want to hurt the quality of those).

    let data = NSMutableData()
    
    guard let imageDestination = CGImageDestinationCreateWithData(data, kUTTypeJPEG, 1, nil) else {
        dispatchQueue.sync { totalConversionsCompleted += 1 }
        return
    }
    
    // Don't compress PNGs, they're too pretty
    let isPNG: Bool = {
        guard let utType = cgImage.utType else { return false }
        return (utType as String) == UTType.png.identifier
    }()
    
    let destinationProperties = [
        kCGImageDestinationLossyCompressionQuality: isPNG ? 1.0 : 0.75
    ] as CFDictionary
    
    CGImageDestinationAddImage(imageDestination, cgImage, destinationProperties)
    CGImageDestinationFinalize(imageDestination)
    

    Now we have much smaller, compressed Data objects kicking around rather than our previously enormous UIImage objects, and we can POST those to an API endpoint for upload or do whatever you’d like! Thanks to everyone on Twitter who gave me pointers here as well. In the end this went from spiking in excess of 2 GB to a small blip of 30 MB for a few seconds.

    Adopt this API! It’s great!

  • Apollo for Reddit 1.9

    Apollo 1.9 is a massive update that’s taken months and months to complete, but I’m really happy with the result, and it brings together a ton of ideas from the community to make Apollo even nicer to use. The update includes a variety of features around crossposts, flair, new app icons, translation, and quality-of-life improvements. Thanks to everyone who writes in via email or the ApolloApp subreddit; your suggestions for what you want to see in Apollo help immensely and really motivate me to keep making Apollo better and better.

    Without further ado, here are the changes included in this 1.9 update to Apollo:

    Crosspost Viewing

    Crossposting (taking an existing post and reposting it to a similar subreddit) has been a big part of Reddit for ages, but recently it became a full-fledged feature where you can see exactly which subreddit it came from, and quickly jump to the original post. Apollo now supports this fully, so you can see the interesting content of the post, but also quickly jump over to read the original discussion! Often it’s like getting two interesting discussions in one!

    Viewing a crosspost in Apollo

    Crossposting

    Similar to being able to view crossposts, you can also easily perform a crosspost if you want as well! Simply select the post you want to crosspost, write a title, select the subreddit to crosspost it to, and bam, you’re off to the races.

    performing a crosspost in Apollo

    Image Flair

    Flair is a little “tag” users can add to their usernames in a subreddit, and some subreddits even allow small images/icons to be added in addition to text, like the icon for your favorite sports team or a character from your favorite TV show. Apollo now shows these beautifully!

    Viewing flair with images in Apollo

    Setting Your Flair

    In addition to being able to view the flair as discussed in the previous item, you can now set your own flair! Simply go to the subreddit of your choosing, and you can choose from a list of customizable flairs so you can add a little personality to your comments, showing which language you’re learning, your username in a video game the subreddit is about, your fitness goals, etc.

    Setting your flair in Apollo

    View Long Flair

    Some users set loooong flair, and as a result it can get cut off, which can be annoying when you’re trying to figure out what it says. Well, be annoyed no longer, for you can simply tap on the long flair to bring up a window that expands it fully!

    Viewing long flair in Apollo

    Find Posts with Same Flair

    If the subreddit lets users tag their posts with individual flairs (say, being able to tag whether your question is about a certain character, or a certain topic), you can now simply tap on that flair and Apollo will show you all the other posts in the subreddit that have been tagged with that same flair.

    Filtering posts with the same flair in Apollo

    5 (Yeah, Five!) New App Icons!

    This update has taken a ton of time to work on, so I was slightly behind on the Ultra icons I wanted to include, but as a result there’s now a proper Icon Bonanza, with five new icons being included in this update. The first three are Ultra icons, all made by the same incredibly talented designer, Matthew Skiles, who I’ve been a fan of for a long time. I love how these turned out: we have our beloved Apollo mascot reimagined as an angel, a devil, as well as a zany pilot, all in gorgeous, colorful iconography. But those three icons aren’t all! Next up, we have a beautiful new Apollo icon representing the trans pride flag (originally created by Monica Helms), which came out really awesome and is a great addition. And last but not least, our incredible community designer, FutureIncident, makes his second appearance with the Japanese-inspired Apollo-san icon! I love this set of icons so much, it’s going to be really hard to choose.

    5 new app icons available in this Apollo update

    Easy Language Translation

    Reddit is home to a diverse set of communities that have a variety of fascinating conversations, but sometimes it’s tricky to understand what’s being said if the conversation is in a language you’re not familiar with. Heck, you might even have no idea what the language is! Now Apollo will be able to detect if the language of a comment or post is different than the language of your iOS device, and if so, offer to quickly translate it so you can understand the conversation! It is so handy, whether you’re following a fascinating conversation or even trying to learn a new language!

    Post/comment translation in Apollo

    Fast Subreddit Selector

    Whether you’re trying to add a single subreddit to a filter, or adding multiple subreddits at a time to a multireddit, Apollo is now even faster at these tasks, with an auto-completing window that makes it super fast to search for and add subreddits.

    Fast subreddit selector in Apollo

    Total Collapsed Comments & Remembering Collapsed Comments

    Two handy new additions to collapsing comments in Apollo. First, Apollo will show you at a glance how many comments are in a collapsed conversation, which can be super handy when viewing a comment thread. Second, if you collapse a bunch of comments and then come back to that same comment section later, Apollo will now remember which comments you had collapsed and keep them collapsed for you!

    Total collapsed comments in Apollo

    New Settings, Filters Tweaks, Bug Fixes, and More!

    A bunch of awesome new settings have been added to Apollo, like being able to disable the auto-looping of videos with audio, or being able to make it so translation options always show up. Filtering is also even more powerful, with your filters now able to target flair and links as well (in addition to the title), and this update fixes a few filtering bugs. Apollo also now shows videos from Reddit’s experimental ‘RPAN’ service, which is essentially a kind of live stream post that you can now view within Apollo. Of course there are a bunch of other small bug fixes around Apollo, from the occasional account accidentally signing out, to video bugs, to Apollo quitting in the background when it shouldn’t, as well as a bunch of other small tweaks across the app to improve your quality of life while browsing!

    Thank You!

    I really hope you enjoy the update, thank you for using Apollo! More great things to come!

  • The Case for Getting Rid of TestFlight Review

    I tweeted today about how I think TestFlight review should become a thing of the past and many developers seemed to agree, but some had questions so I wanted to expand on my thoughts a little.

    TestFlight’s awesome. But like App Store submissions, TestFlight betas also require a review by Apple. At first blush, such a review sounds sensible. TestFlight can distribute apps to up to 10,000 users; if that ran completely unchecked, you could have mini-App Stores running around with sketchy apps being distributed to lots of people.

    But the point I’ll try to make in this article is that the current system TestFlight employs doesn’t do much to prevent this, and further creates a lot of friction for legitimate developers.

    The Review Process

    For TestFlight, when you submit a new version number, it requires a new review. But new build numbers do not (build numbers are like a secondary ID for a version as it goes through development). For instance, I could push a new version of Apollo to TestFlight, version 1.8 (build number 50) and it would need review, but builds 51, 52, 53, etc. of the same version do not require any review.

    The Problem

    Do you see the issue here? There’s not really any oversight into what you can change in those new builds. You could completely change your app into something different, upload it under a different build number, and so long as the previous version was approved and you don’t change the version number, you could send the new one out to thousands of people.

    Someone looking to distribute, say, a console emulator (which Apple doesn’t allow in the App Store) could upload their app as a fun, turtle-themed calculator app (TurtleCalc™) and get it approved on TestFlight, only to turn it into that emulator for build 2 and send it out to thousands of people.

    As a Developer

    On the flip side, for an actual developer with an app on the App Store, it causes a ton of friction, because the other rule of TestFlight is that once a new version goes live on the App Store, you can’t push any new builds to TestFlight without creating a new version and starting the review process again.

    So if you find a bug in the public version of your app and want to beta test the fix, you have to wait a day or two for it to be reviewed by Apple before it can even go into beta testing. A three-line bug fix requires re-review; meanwhile, if you’re a bad actor who just leaves the app in TestFlight without ever pushing it to the App Store, you can update it endlessly without any review whatsoever.

    That means as a developer you’re stuck in this gamble of “Should I just release it to the App Store without any testing? It’s just a bug fix after all, what could go wrong?” versus “Should I let it keep crashing and wait for the TestFlight review to occur so I can test this new build first, even if it means crashing for days more?”

    In a perfect world, you could push that fix out to testers immediately, validate the fix, then submit it to the App Store.

    As a result you have a system that A) doesn’t seem to do anything to stop people from submitting nefarious updates, but B) introduces a ton of friction for legitimate developers.

    “It Serves as an Early Review for the App Store Before Continued Development”

    Some argue that it lets you “test the waters” with an app or an update before submitting it to the App Store at large. For instance, you have an idea that you’re not sure will get through app review, so you build a quick version of the app, submit it to TestFlight, and the review will let you know whether Apple will approve it.

    Unfortunately it doesn’t work like that. Getting through TestFlight review has no bearing on getting through the eventual App Store review. I’ve had builds go through TestFlight review, get the stamp of approval, and be tested in TestFlight for months, only to have the update rejected when I ultimately submit it to the App Store.

    TestFlight review is not at all an accurate way to gauge what App Store reviewers will think. It’s far more lax.

    It Often Requires Double Review

    Even more confusingly, if I decide to take the gamble and just release the bug fix to the App Store and hope all goes well, it’ll go through a quick review and then go live on the App Store.

    But if I want the TestFlight users to use that same version that just got approved, they straight up can’t. Even though it went through the stricter public App Store review, the exact same build has to be reviewed separately for TestFlight. This adds a confusing delay for testers (not to mention extra work for Apple) and is very weird.

    TestFlight Review Takes Longer than App Store Review

    Despite being the more lax review process (as shown above), TestFlight review takes longer. This kinda makes sense; you would hope the majority of staff are focused on the public App Store review, which affects the most users. But it feels bizarre to submit an app to the App Store and TestFlight at the same time (because double review) and have the App Store version go out the same day while the TestFlight version takes a day or two.

    This greatly disincentivizes testing builds when the process to actually get them out takes so long.

    There Are Already Workarounds

    A lot of developers, aware of the above constraints, employ strategies for getting around this process almost completely.

    • As soon as you submit the version to the App Store, you can immediately submit the same version plus one (so 1.8.3 on the App Store, 1.8.4 on TestFlight) even without any changes (just a bumped version number), get it through review, and then the next time you need to test a beta build you have an approved version you can start shoveling new builds onto.
    • An even more clever method some employ is to just have an astronomically high version number used only for TestFlight. So if your App Store version is 1.8, your TestFlight version is 1,000. That way your TestFlight build is always ahead of the App Store version, and once that version gets approved the first time, you can add new builds onto it indefinitely. A lot of developers do this, and it’s clever, but I personally fear angering the App Store folks.

    You might be asking, “Okay… why not just use one of those methods then?” And you totally can, but in neither case is the app actually being reviewed: in the first, it’s an identical version that’s tweaked “secretly” later, and in the second, it’s a single version that gets tweaked forever. That effectively shows how little the review process actually contributes.

    Getting Rid of TestFlight Review Could Speed up Normal Review

    If TestFlight review were to go away for the reasons outlined above, all the awesome folks on that team could be moved over to the “normal” App Store review team, which could make for an even faster review process. Review is already so much better than it used to be, typically under a day (it used to be over a week!), but can you imagine it being the norm to submit a build and have it available within a few hours? That would be fantastic!

    Solution

    I think just getting rid of it completely is fair. As shown, the current process does next to nothing to prevent people from distributing questionable builds, and instead is just a pain for legitimate developers.

    Is it possible that behind the scenes Apple re-reviews builds and might yank them if they find out they break the rules, say a console emulator that’s been getting new builds but no new reviews from Apple for a year? Totally! And I think that’s the system they should simply extend everywhere.

    Do away with the up-front review system altogether, and have a random review process that occurs after the fact, every so often, perhaps transparently and based on the number of testers in the beta (a beta with 8,000 users is more dangerous than one with three people).

    So you submit your version, it immediately goes out to all testers, and then a little while after Apple might flag it for random review. If it passes, it’s completely transparent to you. If it gets rejected, it’ll be pulled.

    End

    TestFlight’s great and I love it, but decreasing friction in beta testing would be a massive help.

  • Announcing Apollo: a new Reddit app for iPhone

    I’m really excited to unveil a project I’ve been working on for the last year or so. It’s called Apollo and it’s a new Reddit app for iPhone.

    I’ve been a Reddit user for about four years now, and the site is a constant source of interesting discussion, hilarity, and news for me every day. I’ve never been completely happy with the Reddit apps out there today, so I set out to scratch an itch and build the best Reddit experience on the iPhone that I could. And I’m really proud of the result.

    Apollo went through a really long design phase, and I sweated every detail. Last spring I was lucky enough to get an offer to work at Apple as an intern for the summer, which meant no time for developing apps for a few months. But by the end of that summer, I had learned so much from so many smart people, had a really cool new language to experiment with, and my motivation to build something incredible had never been higher.

    Since then I’ve been working super hard to build this app, and today I’m finally at a stage where I can comfortably announce it. It’s not available yet, and won’t be for a little while, but it’s getting close and I’d love to have some input. (I made a Reddit thread here.) I’ll also be launching a public beta in the coming weeks, so keep an eye out for that if you want an early look at what’s to come.

    Apollo's frontpage and inbox

    I really put an emphasis on making Apollo feel at home on the iPhone, with a super comfortable browsing experience. It has beautiful, large images, smooth gestures, and really nicely organized comments, and I baked in a lot of the great features that iOS 8 brought about. There’s a ton more as well. I also made sure that it took advantage of as many of Reddit’s great features as possible.

    From a technical standpoint, it’s built for the most part in Swift. I’ve been really happy with the language so far (bar a few issues), and it was awesome to build an app with it.

    I’ve made a page where you can find out more about it and, if you’d like, sign up to be notified when it’s released: https://apolloapp.io

    I’d love to hear your input. You can reach me on Twitter, post in the Reddit thread, or email me if you’d like. I’ll also be posting updates on my Dribbble page.

    Can’t wait to share more in the coming weeks!