• Qi2 is kinda underwhelming

    April 20, 2024

    Using MagSafe for portable battery packs has so many niceties versus Qi1:

    • Increased communication with the device, enabling smarter charging and thermal management for better efficiency
    • Easily view the charge percentage of the external battery when first attaching it, and at any other point right from the OS
    • Reverse-wireless-charging, so if you charge your phone while the pack is attached, the phone will charge up first and then send energy to the battery pack
    • Magnets for better charging reliability (no vibrating off the small charging zone and waking up to a dead phone) and better efficiency (induction points are perfectly lined up), though this point is almost always mimicked in Qi1 battery packs

    Qi2 was supposed to be a glass of ice water to those in the hell of Qi1, and I was hyped! Apple stopped making MagSafe battery packs themselves, and their old pack used Lightning instead of the newer USB-C, so I was excited to see third parties bring MagSafe into the golden age of USB-C.

    The reality though is… kinda lame right now with most of those benefits not being a thing?

    Precursor: most “MagSafe” batteries are just Qi1

    Fred from Scooby Doo unmasking some person who has a disguise on that says 'totally legit MagSafe' and when the disguise is off it's revealed it's just Qi1 with a magnet

    So many magnetic battery pack makers say “MagSafe compatible” on the product page, which leads people to think they’re getting the more efficient charging of MagSafe as well as the extra functionality.

    The word “compatible” is doing a lot of heavy lifting there: it just indicates that the battery packs have a magnet in them while using regular Qi1 charging. None of the actual MagSafe benefits are available. This means they’re kinda “dumb” and don’t communicate well with the host device, leading to hotter devices (and thus faster battery degradation) and lower efficiency due to energy lost as heat.

    “Just use cables!”

    Cables are much more efficient than wireless charging, so this sounds like a great idea until you try it: going through an airport with cables dangling and potentially snagging on things is so, so much less convenient than just having a slightly thicker phone. I’d take a dip in efficiency for a massive increase in convenience, but if you can deal with cable charging while running around, I tip my hat to you.

    That being said, the battery cases of yesteryear were a nice middle ground.

    The only Qi2 battery pack is kinda lacking?

    Despite being announced last year, there’s still like… only one manufacturer offering Qi2 battery packs: Anker. The rest are still “coming soon”.

    Outside of only being offered in bulky sizes, Anker’s offerings seem to miss some of the biggest niceties of MagSafe, presumably through no fault of their own.

    Firstly, Qi2 battery packs seemingly don’t even support OS-level battery status! I can only assume this is an omission on Apple’s part rather than Anker’s, and is hopefully fixed in the future, but that was one of the aspects of Qi2 I was looking forward to the most. All you get is a slightly larger indicator of the phone’s battery level, but not the pack’s. Being able to easily see the percentage of your battery pack when connecting it, and at any point from the OS while using the phone, is super handy.

    A minor one, but it also seems to get slightly warmer than Apple’s offering. Qi2 is supposed to also offer 15W of output whereas Apple’s battery was 7.5W, but Max Tech did a thorough review versus Apple’s old pack, and while it’s worth a watch, the tl;dw is that it seems to warm up quickly and slow down pretty considerably as a result (and be no faster than 7.5W).

    Lastly, there’s no reverse wireless charging like Apple’s MagSafe pack has, so if you’re charging your iPhone over USB-C and have the pack attached, the pack won’t charge. You’d have to plug the pack itself in, which would transfer more heat to the iPhone rather than the other way around with MagSafe (I’d rather have the cheap battery pack get hotter, rather than the expensive iPhone).

    Why Apple’s old pack is actually good

    Apple battery widget showing status of MagSafe battery pack
    Credit: Apple

    Apple’s offering on paper seems pretty great: it’s smaller than the competitors’, and it integrates perfectly with iOS, meaning you get more intelligent charging and power delivery (and a cooler device), and you can see the status of the battery right in the operating system.

    Two issues though: price/availability (at almost twice the competitors’), and the Lightning connector (in our beautiful new USB-C world, I do not want to be carrying a Lightning cable around anymore).

    The first one’s indeed tricky: as Apple’s is discontinued, they’re hard to find, and there are a lot of counterfeit ones online and in local classifieds, so be careful.

    For the Lightning issue, the iPhone supports reverse wireless charging (Qi2 does not on iPhones as of April 2024), so if you plug in your USB-C iPhone with the pack still attached, it’ll charge up the battery after it’s done charging up the phone! No Lightning cable needed. In fact, even the new Qi2 batteries don’t support this; they only support plugging in the battery itself, which then charges the iPhone. That sounds fine, but the battery is the one inductively charging the iPhone, so the iPhone bears the brunt of the heat rather than the other way around, which is less than ideal for battery health.

    Lastly, Apple’s is much thinner than the competitors’ offerings. Which is fine: I don’t need to double my battery life, I just want to extend it when I know I might otherwise be cutting it close by the end of the day.

    (Though one bonus for Qi2 battery packs is that they do support wired charging between the battery and iPhone, unlike Apple’s, which would be a handy feature for charging faster in a pinch, but not a deal-breaker for me.)

    “Apple’s battery capacity is so small”

    There were some strange musings at the beginning complaining that Apple’s is only 1,500 mAh while everyone else is 5,000 mAh, and that’s a perfect indication of why mAh is such a terrible unit for measuring batteries (the recent Vision Pro battery size story being another one). Battery capacity is a function of amp-hours and, crucially, voltage (multiply them together to get watt-hours, an actual measurement of capacity). Apple’s battery uses twice the voltage (7.6 V versus 3.7 V), so the actual capacities are about 11 watt-hours for Apple’s and 18.5 watt-hours for the others.

    Further, if you take the smallest previewed Qi2 case: Belkin’s 5,000 mAh option (available in Australia), it’s 17mm thick, where Apple’s is only 11mm.

    So Apple’s battery capacity being 40% smaller than Belkin’s (11 versus 18.5) kinda makes sense when you see that it’s because it’s 35% thinner.
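    To make the math above concrete, here’s a quick sketch of the watt-hour conversion (using the numbers quoted above; the function name is just for illustration):

    ```swift
    // Watt-hours = volts × amp-hours; mAh alone ignores voltage entirely.
    func wattHours(volts: Double, milliampHours: Double) -> Double {
        volts * (milliampHours / 1_000.0)
    }

    let appleWh = wattHours(volts: 7.6, milliampHours: 1_500)   // ≈ 11.4 Wh
    let belkinWh = wattHours(volts: 3.7, milliampHours: 5_000)  // 18.5 Wh
    print(appleWh, belkinWh)
    ```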

    End notes

    All in all, maybe someone like Belkin will release their Qi2 pack and it’ll be faster, more energy dense, and less hot than Apple’s, and have USB-C, but even then, at least as of April 2024, it will still lack the super handy OS-level battery status, as well as reverse charging. Maybe Apple will add those in iOS 18 and all will be well in the world, or maybe Apple will surprise us all and release a new, USB-C MagSafe battery pack.

    (Also, my one criticism of all battery packs: they and the iPhone really need a magnet connection near the bottom too; right now the top half is secure but the bottom half can just swing around. That part kinda makes me miss old school battery cases.)


  • Waterfield's weirdly compact Apple Vision Pro Case

    April 2, 2024

    Vision Pro on top of a blue lunchbox style case for it, next to a black and white cat looking at its paws

    Disclosure: Waterfield sent this in exchange for a review. Yeah, that probably colors something on a deep-down, subconscious level, but I won’t say anything that I don’t truly believe.

    Unlike a phone or laptop, the Vision Pro is one of those products that is particularly tricky to take around without a case. I’ve gotten around this by wrapping it in a hoodie and throwing it in my backpack, but I was looking for a more… tidy solution long-term.

    Apple’s own case was an obvious option, but the size kinda scared me. I like packing pretty light for trips and only ever bring one bag, so the thought of half the bag being taken up by a Vision Pro case wasn’t the most alluring. A compact size was pretty near the top of my list, so when Waterfield announced their case offering and touted its size, my ears immediately perked up.

    I’ve traveled a bit with it now, and I’ve really come to like it. Here are my thoughts, plus some questions from folks on Twitter and Mastodon.

    Design/build quality

    Case open with Vision Pro nestled snugly inside with an additional fleece inner case that holds accessories. HomePod mini and iPod nano in background.

    The design is reminiscent of a really well-made lunchbox. It’s sturdy, and the outside feels like the ballistic nylon material that a good backpack is made out of, while the inside is a really soft fleece. The inside houses a second (also fleece-wrapped) case which can hold all the Vision Pro accessories and even has individual slots for ZEISS lenses if you have those. In mine I put a charging brick, the polishing cloth, the headband (note on that below), and spare contact lenses in the ZEISS slots. There’s a separate spot on the “ceiling” of the inner case for the Vision Pro’s external battery.

    Case open with battery slid into a fleece pocket on the lid of the case, with a black cat in the background.

    Everything just fits super snugly and thoughtfully, which is kinda what I like most about it. So many Vision Pro cases on the market are just versions for other headsets that happen to fit the Vision Pro with different levels of success, and while that’s obviously totally fine, maybe it’s because the Vision Pro was so expensive, but there’s something really nice feeling about a case designed specifically around it. It’s like using a baseball mitt made just for your hand versus borrowing your friend’s: both are cool, one just makes you go oooooo.

    (Though the battery fits so snugly that it’s a bit tricky to get in when you wrap the cable around it. Instead, wrap the cable around your fingers, slide the battery in, and then slide the organized cable on top of the battery.)

    The zippers feel great; not YKK (oops, I’ve been told they are YKK, which makes sense given the quality, they just have nice ‘Waterfield’ branding on them) but metal, with a water-resistant coating, which I always like to see. It has a grab handle on top, and attachments for shoulder straps that I likely won’t use. The top also has a little zipper pocket that I’m not sure I’d use beyond the AirTag pocket in it, but you could put something thin up there (like another cord, a small external battery, or maybe a very small foldable external keyboard), and even if you don’t use it the pocket is pretty flat so you don’t lose any room to it.

    Water droplets beading on top of the case and zipper.

    I do wish they had more colorways. I’m on Apple’s train of not being a big fan of wrapping my devices in dead animals (though Apple’s FineWoven solution there seems to have missed the mark; Aptera has some really cool plant-based biodegradable leathers for their car), and I wish Waterfield had options outside of black and blue for non-leather (that black and white stormtrooper style one looks so cool). That being said, the photographs of the blue on their website almost don’t do it justice; it’s a really nice navy in real life with just the right amount of color to be a bit fun. But I still want stormtrooper!

    Compactness

    Case open with various items inside, including a banana, an iPhone 15 Pro, a light bulb, a VHS copy of The Mogul, and a Blue Eyes White Dragon card.
    Some items you may have around for scale

    It’s really compact; there’s honestly not really any room to possibly shrink it further. It’s not tiny per se, but going off numbers from each website, Apple’s Vision Pro case is about 10.9 liters in volume (0.38 cubic feet) and the Waterfield case is 5.0 liters (0.18 cubic feet), which is a substantial difference. If you throw it in a 20 liter everyday carry backpack, you’ve gone from it taking 55% of the interior space to just 25%.

    A big part of how they accomplish this is by having you take the headband off which saves a ton of room length-wise versus storing it fully expanded. This is something I was hoping someone would do well before this case was even announced, and it’s plain to see how much room it saves.

    The separate fleece inner case that holds items
    The cutely named “Hat Box” that zippers open to add accessories

    My idea was just to fold the fabric of the headband in a bit, so when I saw Waterfield required you to actually disconnect the headband I was kind of disappointed, because that sounds like a pain. But in all honesty, if you kept the band connected you would have to bend it more at the sides than I’d personally be comfortable with to get it as compact (see below, though), and if you only folded in the back part (and not the sides) it would add a decent amount of length to the case. Still a bunch of space savings to be sure, but in my opinion, unless you’re putting it in the case every night, the compactness is worth the minor inconvenience of disconnecting the headband.

    Protectiveness

    Dumbbell sitting on top of a book ('what if?' by Randall Munroe) sitting on top of the case showing no deforming.

    I was somewhat worried that, since it’s not an actual hardshell style case like Apple’s or others, it would be more like a tech pouch without much protectiveness, but honestly it’s pretty darn sturdy. That’s hard to articulate, but as an example, if I put a 20 lb dumbbell on top of it with nothing inside, you can see it doesn’t deform at all. (To be clear, this does not mean it will survive a 20 lb dumbbell actually dropping on it.)

    It definitely won’t be as protective as a hardcase, but it’s still pretty darn protective. Mine will always be stored in my backpack but if it were to take a small fall alone I personally wouldn’t worry.

    Questions

    I asked on Twitter and Mastodon if anyone had any questions about it, and there were some great ones that I thought I’d answer here.

    Do you still need Apple’s front cover?

    The inside is soft felt so I personally don’t bother, but it does still fit if you’re so inclined.

    Do you HAVE to take the headband off to store it?

    Technically no, it fits with the band still attached, but to me it’s like that scene in Cinderella where the stepsisters try to fit in the glass slipper, shoving their feet in to just barely make it fit. In other words, it seems to put a bit more pressure on the side of the headband than I’d like, but hey, if you want to risk it, the $99 to replace the Solo Knit Band (if I’m right) is one of the more affordable Vision Pro accessories. (All this applies to the Dual Loop Band, too.)

    Vision Pro with the Solo Knit Band still on, showing it quite squished at the extremity.

    Could a cat sleep on it?

    If a 20 lb dumbbell doesn’t deform it, I think most cats could sleep on it fine; the issue is that it’s kinda small, so not the most comfy. My cats stick to sleeping on my backpacks.

    Does the battery fit in it? Could it hit the glass?

    Yeah, there’s a proper space right above the Vision Pro. If you put it in the intended way (front of Vision Pro in toward the non-zippered edge) the Vision Pro’s front will be toward the front of the case, and the battery in the ceiling will be toward the back of the case, sitting on the storage accessory, so no chance of contact.

    Can the battery stay plugged in in the case?

    Yeah! I find the standby life of the Vision Pro isn’t the best, and it sometimes gets warm, so I personally would unplug it for travel, but you definitely don’t have to.

    Does a Mac mini fit inside?

    A Mac mini diagonally in the case indicating it won't quite fit.

    Nope, the case is just slightly too compact for that. That would have been cool though.

    Can you use it like a lunchbox?

    Honestly, felt is a pretty effective thermal insulator, so probably, but I’d worry about condensation buildup.

    Ease of zipping/speed of use

    I find water-resistant zippers always have a bit more friction than their normal counterparts, so it’s a bit of a two-handed operation to zip open/closed. But I feel like if I were on an airplane, it’d be pretty quick to disconnect the headband, throw it in the pouch, throw in the Vision Pro, zip it up, and leave. Not as fast as yeeting your Vision Pro into a backpack if there were, like, an emergency, but pretty reasonable if you have a second.

    What does it carry?

    The case on a table showing the items it can carry, including the Vision Pro, a USB-C cable, the two included headbands, an AirTag, the battery, AirPods Pro, a polishing cloth, a PSA 9 first edition Zubat Pokémon card, the inner fleece case, two contact lenses, and a charging brick.

    It can carry the Vision Pro, both headbands, polishing cloth, a wall brick, a USB-C cable, ZEISS inserts (or contact lenses), an AirTag, and something small in the top pocket. Here’s what I have in mine.

    Conclusion

    At $159 for my non-leather version, it’s not cheap (though it’s cheaper than Apple’s own case), but I keep coming back to it reminding me of a really nice backpack. You can go on Amazon and find an obscure brand backpack for super cheap that will absolutely get the job done at the cost of long-term confidence, or, if you want to treat yourself, you can buy a really high-quality backpack from a trusted brand with a bunch of delightful touches that make you smile when you use it even years later. Some people get weirdly into nice backpacks; I’m unfortunately one of them.

    So all in all, it’s a great example of how something seemingly simple can be elevated by thoughtful design and quality materials.

    Non-affiliate link to the Waterfield case


  • Recreating Apple's beautiful visionOS search bar

    March 24, 2024

    visionOS Music app in the Joshua Tree environment with the window showing a search bar at the top with rounded corners

    Many of Apple’s own visionOS apps, like Music, Safari, and Apple TV, have a handy search bar front and center on the window so you can easily search through your content. Oddly, as of visionOS 1.1, replicating this visually as a developer using SwiftUI or UIKit is not particularly easy due to lack of a direct API, but it’s still totally possible, so let’s explore how.

    First let’s get a few ideas out of the way to maybe save you some time.

    On the SwiftUI side, .searchable() is an obvious API to try, but even with the placement API there’s no way to put it in the center (by default it’s to the far right, and you can either put it to the far left or under the navigation bar by passing different values). With toolbarRole, similar deal: values like .browser will put it to the left instead, but not the middle. ToolbarItem(placement: .principal) meets a similar fate, as on visionOS the principal position is to the left, not the center.

    Basic SwiftUI window with search bar to the left and text in the middle that simply says 'Perhaps the coolest View ever'
    Default SwiftUI searchable() position
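    For reference, that default attempt looks something like this minimal sketch (the view name is just for illustration):

    ```swift
    import SwiftUI

    // The obvious first attempt: the stock .searchable() modifier.
    // On visionOS 1.x this lands the field at the edge, not front and center.
    struct CoolView: View {
        @State private var query = ""

        var body: some View {
            NavigationStack {
                Text("Perhaps the coolest View ever")
                    .navigationTitle("Search")
                    .searchable(text: $query)
            }
        }
    }
    ```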

    In UIKit, the situation is similar, where navigationItem.titleView is to the left, not center, on visionOS, and I was unable to find any other APIs that worked here.

    You could technically recreate the navigation bar UIView/View from scratch, but navigation bars on visionOS have a nice progressive blur background that wouldn’t be fun to recreate, not to mention all the other niceties they have.

    All this to say, it’s totally possible there’s a clear API to do it, but I’ve dug around and poked a bunch of different people, so it’s well hidden if it does exist! I’m assuming Apple’s using an internal-only API, or at least a custom UI here.

    SwiftUI doesn’t directly have the concept of a search bar view unfortunately, just the .searchable modifier that only takes a few arguments, so… you know…

    That Simpsons meme where they say 'Say the line, Bart!' but he responds 'Let's use UIKit' with much sadness

    Step 1: Bridging UISearchBar into SwiftUI

    We’ll create a SwiftUI interface into UIKit’s UISearchBar that allows us to store the typed text and respond when the user hits enter/return.

    struct SearchBar: UIViewRepresentable {
        @Binding var text: String
        var onSearchButtonClicked: () -> Void
    
        func makeUIView(context: Context) -> UISearchBar {
            let searchBar = UISearchBar()
            searchBar.delegate = context.coordinator
            return searchBar
        }
    
        func updateUIView(_ uiView: UISearchBar, context: Context) {
            uiView.text = text
        }
    
        func makeCoordinator() -> SearchBarCoordinator { SearchBarCoordinator(self) }
    }
    
    class SearchBarCoordinator: NSObject, UISearchBarDelegate {
        var parent: SearchBar
    
        init(_ searchBar: SearchBar) {
            self.parent = searchBar
        }
    
        func searchBar(_ searchBar: UISearchBar, textDidChange searchText: String) {
            parent.text = searchText
        }
    
        func searchBarSearchButtonClicked(_ searchBar: UISearchBar) {
            parent.onSearchButtonClicked()
            searchBar.resignFirstResponder()
        }
    }
    

    Now we can easily use it like so:

    struct ContentView: View {
        @State private var searchText = ""
    
        var body: some View {
            SearchBar(text: $searchText) {
                print("User hit return")
            }
        }
    }
    
    Search bar at the very top but taking up full width so it overlaps the title in an ugly way

    Hmm, looks a little off.

    Step 2: Positioning

    Cool, we have a search bar, how do we position it? Again, tons of ways to do this. Perhaps the “most correct” way would be to completely wrap a UINavigationBar or UIToolbar, add a UISearchBar as a subview and then move it around in layoutSubviews relative to the other bar button items, titles, and whatnot. But that’s probably overkill, and we want a simple SwiftUI solution, so (as the great Drew Olbrick suggested) we can just overlay it on top of our NavigationStack.

    NavigationStack {
        Text("Welcome to my cool view")
            .navigationTitle("Search")
    }
    .overlay(alignment: .top) {
        SearchBar(text: $searchText) {
            print("User hit return")
        }
    }
    

    This is actually great, as we get all the niceties of the normal SwiftUI APIs, and the system even appropriately spaces our search bar from the top of the window. The only issue is an obvious one: the width is all wrong. Studying how Apple does it, in the Music and Apple TV apps the search bar just stays a stationary width, as the window can’t get too narrow. Let’s modify ours a bit so that if our window does get too narrow, the search bar never takes up more than half the window’s width (Apple’s probably does something similar, but more elegantly), by wrapping things in a GeometryReader. The height is fine to stay as-is.

    struct SearchBar: View {
        @Binding var text: String
        var onSearchButtonClicked: () -> Void
        
        var body: some View {
            GeometryReader { proxy in
                InternalSearchBar(text: $text, onSearchButtonClicked: onSearchButtonClicked)
                    .frame(width: min(500.0, proxy.size.width / 2.0))
                    .frame(maxWidth: .infinity, alignment: .center)
            }
        }
    }
    
    struct InternalSearchBar: UIViewRepresentable {
        @Binding var text: String
        var onSearchButtonClicked: () -> Void
    
        func makeUIView(context: Context) -> UISearchBar {
            let searchBar = UISearchBar()
            searchBar.delegate = context.coordinator
            return searchBar
        }
    
        func updateUIView(_ uiView: UISearchBar, context: Context) {
            uiView.text = text
        }
    
        func makeCoordinator() -> SearchBarCoordinator { SearchBarCoordinator(self) }
    }
    
    class SearchBarCoordinator: NSObject, UISearchBarDelegate {
        var parent: InternalSearchBar
    
        init(_ searchBar: InternalSearchBar) {
            self.parent = searchBar
        }
    
        func searchBar(_ searchBar: UISearchBar, textDidChange searchText: String) {
            parent.text = searchText
        }
    
        func searchBarSearchButtonClicked(_ searchBar: UISearchBar) {
            parent.onSearchButtonClicked()
            searchBar.resignFirstResponder()
        }
    }
    

    Which results in…

    Search bar at the top of the window, centered horizontally and not taking up the full width

    Bam.

    Step 3: Corner radius

    Our corner radius looks different than Apple’s at the top of the article!

    One oddity I noticed is that different Apple apps on visionOS use different corner radii despite it being that same, front and center search bar (rounded rectangle: Apple TV, Photos, App Store; circular: Music, Safari). Presumably this is just an oversight, but after poking some Apple folks it seems like the circular option is the correct one in this case, and I too prefer the look of that, so let’s go with it.

    One issue… The default is a rounded rectangle, not circular/capsule, and the API to directly change this (as far as I can tell) is private. But cornerRadius is just a public property on CALayer, so we just have to find the correct layer(s) and tweak them so they’re circular instead. We can do this by subclassing UISearchBar, monitoring its subviews for any changes to their layers’ corner radius, and changing those layers to our own circular corner radius.

    class CircularSearchBar: UISearchBar {
        private var didObserveSubviews = false
        private let desiredCornerRadius = 22.0
        
        override func willMove(toWindow newWindow: UIWindow?) {
            super.willMove(toWindow: newWindow)
            
            guard !didObserveSubviews else { return }
            observeSubviews(self)
            didObserveSubviews = true
        }   
        
        func observeSubviews(_ view: UIView) {
            view.layer.addObserver(self, forKeyPath: "cornerRadius", options: [.new], context: nil)
            view.subviews.forEach { observeSubviews($0) }
        }
        
        override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
            guard keyPath == "cornerRadius" else {
                super.observeValue(forKeyPath: keyPath, of: object, change: change, context: context)
                return
            }
            
            guard let layer = object as? CALayer else { return }
            guard layer.cornerRadius != desiredCornerRadius else { return }
            
            layer.cornerRadius = desiredCornerRadius
        }
    }
    

    Which gives us this beautiful, circular result once we replace UISearchBar with CircularSearchBar.

    Search bar at the top of the window with a fully circular corner radius

    Step 4: Remove hairline

    A hairline border underneath the search bar in the center
    Nooo, what IS that?

    Just when you think you’re done, you notice there’s a little hairline border underneath the search bar that looks kinda off in our context. This also isn’t easily addressable with an API, but we can find it ourselves and hide it. You’d think you’d just find a thin UIView and hide it, but Apple made this one nice and fun by making it a normal sized image view set to an image of a thin line.

    Knowing that, we could find the image view and set its image to nil, or hide it, but through something done behind the scenes those operations seem to be overwritten. However, just setting the alpha to 0 hides it perfectly.

    private func hideImageViews(_ view: UIView) {
        if let imageView = view as? UIImageView {
            imageView.alpha = 0.0
        }
        
        view.subviews.forEach { hideImageViews($0) }
    }
    

    And add hideImageViews(self) to our willMove(toWindow:) method.
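    Putting that together, the willMove(toWindow:) override in our CircularSearchBar would end up looking something like this sketch:

    ```swift
    // CircularSearchBar's willMove(toWindow:) with the hairline fix added:
    // hide the hairline's image view alongside setting up the KVO observation.
    override func willMove(toWindow newWindow: UIWindow?) {
        super.willMove(toWindow: newWindow)
        hideImageViews(self)

        guard !didObserveSubviews else { return }
        observeSubviews(self)
        didObserveSubviews = true
    }
    ```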

    Search bar at the top of the window, without any border underneath, shown in an app called Penguin Finder with a penguin as the window's background image with a progressive blur at the top under the search bar
    That's it! 🎉

    With that, we’re done and we should have a nice solution for a search bar that more closely mimics how visionOS shows prominent search bars, at least until Apple hopefully adds a more straightforward way to do this! (FB13696963)


  • Trials and tribulations of 360° video in Juno

    February 25, 2024

    In building Juno, a visionOS app for YouTube, a question that’s come up from users a few times is whether it supports 360° and 180° videos (for the unfamiliar, it’s an immersive video format that fully surrounds you). The short answer is no, it’s sort of a niche feature without much adoption, but for fun I wanted to take the weekend and see what I could come up with. Spoiler: it’s not really possible, but the why is kinda interesting, so I thought I’d write a post talking about it, and why it’s honestly not a big loss at this stage.

    How do you even… show a 360° video?

    Logo for Apple's RealityKit framework, which is three stacked rectangles, dark grey, grey, then yellow at the top, with a 3D sphere, a cylinder, and a cone in white sitting on top

    It’s actually a lot easier than you might think. A 360° (or 180°) video isn’t some crazy format in some crazy shape; it’s just a regular, rectangular video file in appearance, but it’s been recorded and stored in that rectangle slightly warped, with the expectation that whatever displays it will unwarp it.

    So how do you display it? Also pretty simply: you just create a hollow sphere and tell your graphics engine (in iOS’ case: RealityKit) to stretch the video over the inside of the sphere. Then you put the user at the center of that sphere (or half-sphere, in the case of 180° video), and bam, immersive video.
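    As a rough sketch, RealityKit’s VideoMaterial makes this surprisingly short, assuming you had direct access to the video (which, as we’re about to see, is exactly the problem). The function name and radius here are arbitrary:

    ```swift
    import AVFoundation
    import RealityKit

    // Sketch: map a 360° video onto the inside of a large sphere.
    // This only works with a direct AVPlayer, not a web view.
    func makeImmersiveVideoSphere(url: URL) -> ModelEntity {
        let player = AVPlayer(url: url)
        let sphere = ModelEntity(
            mesh: .generateSphere(radius: 1_000),
            materials: [VideoMaterial(avPlayer: player)]
        )
        // Invert the sphere so the texture renders on the inside,
        // facing the viewer standing at its center
        sphere.scale = SIMD3<Float>(-1, 1, 1)
        player.play()
        return sphere
    }
    ```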

    There’s always a catch

    In RealityKit, you get shapes, and you get materials you can texture those shapes with. But you can’t just use anything as a texture, silly. Applying a texture to a complex 3D shape can be a pretty intensive thing to do, so RealityKit basically only wants images, or videos (a bit of an oversimplification but it holds for this example). You can’t, for instance, show a scrolling recipe list or a dynamic map of your city and stretch that over a cube. “Views” in SwiftUI and UIKit (labels, maps, lists, web views, buttons, etc.) are not able to be used as a material (yet?).

    This is a big problem for us. If you don’t remember, while Juno obviously shows videos, it uses web views to accomplish this, as it’s the only way YouTube allows developers to show YouTube videos (otherwise you could, say, avoid viewing ads, which YouTube doesn’t want), and I don’t want to annoy Google/YouTube.

    Web views, while they show a video, are basically a portal into a video watching experience. You’re just able to see the video; you don’t have access to the underlying video directly, so you can’t apply it as a texture with RealityKit, which means no sphere, and thus no 360° video.

    Unless…

    Let’s get inventive

    Vision from Wandavision TV show saying 'What is video, if not a bunch of images persevering?'
    Do I get points for including Vision in an article about Vision Pro

    Okay, so we only have access to the video through a web view. We can see the video though, so what if we just continue to use the web player, and as it plays for the user, we take snapshots of each video frame and paint those snapshots over the sphere that surrounds the user? Do it very rapidly, say, 24 times per second, and you effectively have 24 fps video. Like a flip book!
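    The pacing side of that idea is easy enough to sketch with WKWebView’s snapshot API (drawFrame here is a hypothetical callback that would paint the sphere); the hard part turns out to be what the snapshot actually contains:

    ```swift
    import UIKit
    import WebKit

    // Flip-book sketch: snapshot the web view ~24 times a second and
    // hand each frame off to whatever paints the sphere.
    func startFlipBook(webView: WKWebView, drawFrame: @escaping (UIImage) -> Void) -> Timer {
        Timer.scheduledTimer(withTimeInterval: 1.0 / 24.0, repeats: true) { _ in
            webView.takeSnapshot(with: nil) { image, _ in
                if let image { drawFrame(image) }
            }
        }
    }
    ```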

    Well, easier said than done! The first big hurdle is that when you take a snapshot of a web view (WKWebView), everything renders into an image perfectly… except the playing video. I assume this is because the video is hardware decoded in a way that’s separate from how iOS performs the capture, so it’s absent. (It’s not because of DRM or anything like that; it also occurs for plain local videos on my website.)

    This is fixable, though: with JavaScript we can draw the HTML video element into a separate canvas, and then snapshot the canvas instead.

    // Mirror the <video> element's current frame into a <canvas> we can snapshot
    const video = document.querySelector('video');
    const canvas = document.createElement('canvas');
    
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    
    const ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    
    // Swap the hidden video out for the visible canvas
    video.style.display = 'none';
    document.body.prepend(canvas);
    

    Okay, now we have the video frame visible. How do we capture it? There are a bunch of different tactics I tried for this, and I couldn’t quite get any of them fast enough to simulate 24 FPS (to capture 24 frames per second, each frame capture must take less than about 42 ms). But let’s enumerate them from slowest to fastest at snapshotting a 4K video frame (average of 10 runs).
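    For the curious, the budget arithmetic is simple enough to sketch (illustrative numbers only):

```javascript
// Frame-budget math for the flip-book approach
function msPerFrame(fps) {
  return 1000 / fps; // time allowed per capture to sustain a given frame rate
}

function achievableFps(captureMs) {
  return 1000 / captureMs; // frame rate a given capture time can sustain
}

console.log(msPerFrame(24));    // ≈ 41.7 ms budget per frame for 24 fps
console.log(achievableFps(70)); // ≈ 14.3 fps for a 70 ms capture
```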

    CALayer render(in: CGContext)

    Renders a CALayer into a bitmap context, which we then read back as a UIImage.

    // Render the web view's layer into a bitmap graphics context
    UIGraphicsBeginImageContextWithOptions(webView.bounds.size, true, 0.0)
    
    let context = UIGraphicsGetCurrentContext()!
    
    webView.layer.render(in: context)
    
    // Grab the rendered UIImage and clean up the context
    let image = UIGraphicsGetImageFromCurrentImageContext()
    
    UIGraphicsEndImageContext()
    

    ⏱️ Time: 270 ms

    Metal texture

    (Code from Chris on StackOverflow)

    extension UIView {
        func takeTextureSnapshot(device: MTLDevice) -> MTLTexture? {
            let width = Int(bounds.width)
            let height = Int(bounds.height)
            
            // Render the view's layer into a CPU-side bitmap context
            if let context = CGContext(data: nil,
                                       width: width,
                                       height: height,
                                       bitsPerComponent: 8,
                                       bytesPerRow: 0,
                                       space: CGColorSpaceCreateDeviceRGB(),
                                       bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue),
               let data = context.data {
                layer.render(in: context)
                
                // Copy the bitmap's bytes into a new Metal texture
                let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                    width: width,
                                                                    height: height,
                                                                    mipmapped: false)
                
                if let texture = device.makeTexture(descriptor: desc) {
                    texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                                    mipmapLevel: 0,
                                    withBytes: data,
                                    bytesPerRow: context.bytesPerRow)
                    return texture
                }
            }
            
            return nil
        }
    }
    
    let texture = self.webView.takeTextureSnapshot(device: MTLCreateSystemDefaultDevice()!)
    

    ⏱️ Time: 250 ms (I really thought this would be faster, and maybe I’m doing something wrong, or perhaps Metal textures are hyper-efficient once created, but take a bit to create in the first place)

    UIView drawHierarchy()

    let rendererFormat = UIGraphicsImageRendererFormat.default()
    rendererFormat.opaque = true
    
    let renderer = UIGraphicsImageRenderer(size: webView.bounds.size, format: rendererFormat)
    
    let image = renderer.image { _ in
        webView.drawHierarchy(in: webView.bounds, afterScreenUpdates: false)
    }
    

    ⏱️ Time: 150 ms

    JavaScript transfer

    What if we relied on JavaScript to do all the heavy lifting: have the canvas write its contents into a base64 string, and then, using WebKit messageHandlers, communicate that back to Swift?

    const video = document.querySelector('video');
    const canvas = document.createElement('canvas');
    
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    
    const ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    
    video.style.display = 'none';
    document.body.prepend(canvas);
    
    // 🟢 New code: encode the canvas as a JPEG data URL and hand it to Swift
    const imageData = canvas.toDataURL('image/jpeg');
    webkit.messageHandlers.imageHandler.postMessage(imageData);
    

    Then convert that to UIImage.

    func userContentController(_ userContentController: WKUserContentController, didReceive message: WKScriptMessage) {
        if message.name == "imageHandler", let dataURLString = message.body as? String {
            let image = convertToUIImage(dataURLString)
        }
    }
    
    private func convertToUIImage(_ dataURLString: String) -> UIImage? {
        // Avoid force unwraps: a malformed data URL just yields nil
        guard let dataURL = URL(string: dataURLString),
              let data = try? Data(contentsOf: dataURL) else { return nil }
        return UIImage(data: data)
    }
    

    ⏱️ Time: 130 ms

    WKWebView takeSnapshot()

    webView.takeSnapshot(with: nil) { image, error in
        self.image = image
    }
    

    ⏱️ Time: 70 ms
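    One thing that might shave this down further — untested on my end, and the 1280 is just an example value, not a measured sweet spot — is WKSnapshotConfiguration, which lets you ask WebKit for a smaller snapshot than the view’s full size:

```swift
import WebKit

// Hypothetical tweak: request a narrower snapshot than the full view.
// snapshotWidth is in points; WebKit scales the page content down for you.
let configuration = WKSnapshotConfiguration()
configuration.afterScreenUpdates = false // don't wait for a pending layout pass
configuration.snapshotWidth = 1280       // example value, not benchmarked

webView.takeSnapshot(with: configuration) { image, error in
    guard let image else { return }
    self.image = image
}
```

    Smaller bitmaps mean less data to shuffle per frame, at the cost of texture sharpness.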

    Test results

    As you can see, the best of the best got to about 14 frames per second, which isn’t quite up to video playback level. Close, but not quite. I’m out of ideas.

    There were some interesting suggestions to use the WebCodecs VideoFrame API, or an OffscreenCanvas, but maybe due to my lack of experience with JavaScript, I couldn’t get them meaningfully faster than the above JavaScript code with a normal canvas.
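    For reference, my understanding of the WebCodecs route — an untested sketch using browser-only APIs, so take it with salt — is that you wrap the playing video element in a VideoFrame and copy its pixels out directly, skipping the 2D canvas entirely:

```javascript
// Untested sketch of the WebCodecs VideoFrame approach (browser-only APIs)
async function captureFrame(videoElement) {
  // Wrap the <video>'s current frame without drawing it to a canvas
  const frame = new VideoFrame(videoElement);

  // Copy the raw pixel data out into an ArrayBuffer
  const buffer = new ArrayBuffer(frame.allocationSize());
  await frame.copyTo(buffer);

  frame.close(); // frames must be closed promptly or decoding stalls
  return buffer;
}
```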

    If you have another idea, that you get working, I’d love to hear it.

    Why not just get the direct video file then?

    There are two good answers to this question.

    First, the obvious one: Google/YouTube doesn’t like this. If you get the direct video URL, you can circumvent ads, which they’re not a fan of. I want Juno to happily exist as an amazing visionOS experience for YouTube, and Google requests you do so through the web player; I think I can build an awesome app with that, so that’s fine by me. 360° video is a small feature and I don’t think it’s worth getting in trouble over.

    Secondly, having access to the direct video still wouldn’t do you any good. Why? Codecs.

    Battle of the codecs

    Quick preamble. For years, pretty much all web video was H264. Easy. It’s a format that compresses video to a smaller file size while still keeping a good amount of detail. The act of uncompressing it is a little intensive (think unzipping a big zip file), so computers have dedicated chips built specifically to do this mega fast. You can do it without them, purely in software, but it takes longer and consumes more power, so it’s not ideal.

    Time went on, videos got bigger, and the search began for something that compresses video even better than H264 (and ideally without licensing fees). The creatively named H265 (aka HEVC) was born, and Apple uses it a bunch (it still has licensing fees, however). Google went in a different direction and developed VP9, making it royalty-free, though there were still concerns around patents. These formats can produce video files that are half the size of H264 at the same visual quality.

    Apple added an efficient H265 hardware decoder to the iPhone 7 back in 2016, but to my knowledge VP9 decoding is done completely in software to this day and just relies on the raw power and efficiency of Apple’s CPUs.

    Google wanted to use its own VP9 format on YouTube, and for 4K videos and above, only VP9 is available, no H264.

    Okay, and?

    So if we want to play back a 4K YouTube video on our iOS device, we’re looking at a VP9 video plain and simple. The catch is, you cannot play VP9 videos on iOS unless you’re granted a special entitlement by Apple. The YouTube app has this special entitlement, called com.apple.developer.coremedia.allow-alternate-video-decoder-selection, and so does Safari (and presumably other large video companies like Twitch, Netflix, etc.)

    But given that I can’t find any official documentation on that entitlement from Apple, it’s safe to say it’s not an entitlement you or I are going to be able to get, so we cannot play back VP9 video, meaning we cannot play back 4K YouTube videos. Your guess is as good as mine as to why; maybe it’s very complex to implement without a native hardware decoder, so Apple doesn’t like giving it out. So if you want 4K YouTube, you’re looking at either a web view or the YouTube app.

    Apple keynote slide for the M3 chip listing AV1 decode support

    Given that no one could agree on a video format, everyone went back to the drawing board, formed a collective group called the Alliance for Open Media (Google, Apple, Samsung, Netflix, etc.), and authored the AV1 codec: hopefully the one video format to rule them all, with no licensing fees and no patent issues.

    Google uses this on YouTube, and Apple even added a hardware decoder for AV1 in their latest A17 and M3 chips. This means on my iPhone 15 Pro I can play back an AV1 video in iOS’ AVPlayer like butter.

    Buuuuttttt, the Apple Vision Pro ships with an M2, which has no such hardware decoder.

    Why it’s not a big loss

    So the tl;dr so far is that YouTube uses the VP9 codec for 4K video, and unless you’re special, you can’t play back VP9 video directly, which we need to do to be able to project it onto a sphere. Why not just do 1080p video?

    Because even 4K video looks bad in 360 degrees.

    Wait, what? Yeah, 4K looks incredible on the big TV in front of you, but remember that for 360° video, that same resolution completely surrounds you. At any given point, the area you’re looking at is a small subset of the full resolution! The Vision Pro’s displays are roughly 4K per eye, meaning anywhere you look can show a 4K image, but when you stretch a 4K video all around you, the part you’re actually looking at is nowhere near 4K. It’s almost like the Vision Pro’s effective resolution per eye drops enormously. If you’re familiar with the pixels per degree (PPD) measurement for VR headsets, 4K immersive video has quite a bad PPD measurement.

    To test this further, I downloaded a 4K 360° video and projected it onto a sphere. The video is stretched from my feet to over my head. When I look straight, I’d say I’m looking at maybe 25% of the total height of the video. That means in a 4K video, which is 2,160 pixels tall, I can see maybe 25% of those pixels, or 540 pixels, so it looks a bit better than a 480p video but far from even 720p.
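    Napkin-math version of that, with the 25% being my eyeballed figure from that test, not a measured one:

```javascript
// Rough visible-resolution math for 360° video
// Assumption: your forward gaze covers ~25% of the video's vertical extent
const visibleFraction = 0.25;

const fourKHeight = 2160;  // 4K equirectangular video height in pixels
const eightKHeight = 4320; // 8K doubles the vertical pixel count

console.log(fourKHeight * visibleFraction);  // 540 px — between 480p and 720p
console.log(eightKHeight * visibleFraction); // 1080 px — why 8K looks so much better
```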

    Quick attempted visualization, showing the area you look at with a 4K TV:

    A TV in a living room in the center of your vision, labeled as 2160 pixels in height.

    Versus the area you look at a 4K 360° video:

    A visualization of being fully immersed in a 4K video, showing the center point that you're actually looking at only being maybe 540 pixels in height.

    So in short, it might be 4K, but it’s stretched over a far more massive area than you’re used to when you think about 4K. Imagine your 4K TV were the size of your wall and you were watching it from a foot away: it’d be immersive, but much less sharp. In reality, it only looks a bit better than 480p or so.

    So while it’d be cool to have 4K 360° video in Juno, I don’t think it looks good enough that it’s that compelling an experience.

    Enter 8K

    The demo videos on the Apple Vision Pro (and the videos they show at the Apple Store) are recorded in 8K, which gives you twice as many vertical and horizontal pixels to work with, and it levels up the experience a ton. Apple wasn’t flexing here; I’d say 8K is the minimum you want for a compelling, immersive video experience.

    A person in a yellow jacket walking across a rope spread over a valley, with a large camera on a crane recording her.
    Apple's impressive 8K immersive video recording setup

    And YouTube does have 8K 360° videos! They’re rare, since the hardware to record them isn’t cheap, but they are available. And pretty cool!

    But if I was a betting man, I doubt that’s ever coming to the first generation Vision Pro.

    Why? As mentioned, 8K video on YouTube is only available in VP9 and AV1. The Vision Pro has an M2 rather than an M3, so it has no hardware AV1 decoder and would have to decode in software. Testing on my M1 Pro MacBook Pro, which benchmarks similarly to the Vision Pro on Geekbench, trying to play back 8K 360° video in Chrome is quite choppy and absolutely hammers my CPU. Apple’s chips may be powerful enough to grunt through 4K without a hardware decoder, but it doesn’t seem you can brute-force 8K.

    Maybe I’m wrong or missing something, or Google works with Apple and re-encodes their 360° videos in a specific, Vision-Pro-only H265 format, but I’m not too hopeful that this generation of the product, without an M3, will have 8K 360° YouTube playback. That’s admittedly an area where the Quest 3 has the Vision Pro beat: its Qualcomm chip has an AV1 decoder.

    Does this mean the Vision Pro is a failure and we’ll never see 8K immersive video? Not at all: you could do it in a different codec, and Apple has already shown it’s possible. I’m just not too hopeful for YouTube videos at this stage.

    In Conclusion

    As a developer, playing back the codec YouTube uses for its 4K video does not appear possible. It also doesn’t seem possible to snapshot frames fast enough to project them in realtime 3D. And even if it were, 4K video unfortunately doesn’t look that great; ideally you want 8K, which seems even less likely.

    But dang, it was a fun weekend learning and trying things. If you manage to figure out something that I was unable to, tell me on Mastodon or Twitter, and I’ll name the 3D Theater in Juno after you. 😛 In the meantime, I’m going to finish up Juno 1.2!

    Additional thanks to: Khaos Tian, Arthur Schiller, Drew Olbrich, Seb Vidal, Sanjeet Suhag, Eric Provencher, and others! ❤️


  • My little Apple Vision Pro stand

    February 19, 2024

    Three-quarter shot of the Vision Pro stand showing the Vision Pro standing vertically on a stand with a Pringle chip style top that it rests on, connected to the base with a walnut dowel, with the base holding the battery with an organizer for the cable

    I want somewhere to put my Vision Pro when not in use. Many people use the original box, and there’s beautiful stands that exist out there, but I was looking for something more compact and vertical so it would take up less room on my desk.

    So I opened Fusion 360 (which I am still very much learning), grabbed my calipers, and set out to design a little stand. There was interest when I showed the first version, so I set about tidying it up a bit before making it available. Mainly, the rod going up to the pringle part was a bit weak, so I ended up beefing up the diameter to 3/4". This also means you can either 3D print the rod, or pick up a 3/4" rod of your own, cut to 215 mm in length, in a cool material like walnut, maple, brass, steel, or copper for pretty cheap from Home Depot or Amazon. Then just use some superglue to bind them.

    Anywho, I quite like the end result: it’s compact, seems pretty secure, holds the battery, and has a spot for the cable and rubber feet. Mine is printed in marble PLA with a $3 walnut dowel. It’s designed around my 23W light seal; I admittedly don’t know how it works with other sizes.

    Close up shot of the bottom of the stand holding a battery and having an area to keep the cables tidy

    You can download it on MakerWorld. I like them because I can send the designs to my Bambu printer really easily, and they give free filament with enough downloads, haha. You can also download just the top pringle if you want a lens shield for your bag or something. If you really like the design, please give it a Boost on MakerWorld too!

    For a period of time, if you say the promo code “free free free for me me me” to your computer, the download will be available for free.

    I’m unfortunately not assembling/selling this myself, but if you don’t have a 3D printer (I quite like my Bambu P1S, if you’re looking for a suggestion), lots of local libraries have 3D printers nowadays, or you can submit it to sites like PCBWay to have them make it for you. I don’t quite have the expertise/time to manufacture and sell this for folks without a 3D printer, but if you, the reader, would like to on Etsy or something, go for it! (Just maybe link credit back, and if you make a bunch of money, donate some to your local animal shelter and I’d be stoked.)