• Trials and tribulations of 360° video in Juno

    February 25, 2024

    In building Juno, a visionOS app for YouTube, a question that’s come up from users a few times is whether it supports 360° and 180° videos (for the unfamiliar, it’s an immersive video format that fully surrounds you). The short answer is no, it’s sort of a niche feature without much adoption, but for fun I wanted to take the weekend and see what I could come up with. Spoiler: it’s not really possible, but the why is kinda interesting, so I thought I’d write a post talking about it, and why it’s honestly not a big loss at this stage.

    How do you even… show a 360° video?

    Logo for Apple's RealityKit framework, which is three stacked rectangles, dark grey, grey, then yellow at the top, with a 3D sphere, a cylinder, and a cone in white sitting on top

    It’s actually a lot easier than you might think. A 360° (or 180°) video isn’t some crazy format in some crazy shape; in appearance it’s just a regular, rectangular video file, but it’s been recorded and stored in that rectangle slightly warped, with the expectation that whatever displays it will unwarp it.

    So how do you display it? Also pretty simply, you just create a hollow sphere, and you tell your graphical engine (in iOS’ case: RealityKit) to stretch the video over the inside of the sphere. Then you put the user at the center of that sphere (or half-sphere, in the case of 180° video), and bam, immersive video.
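In RealityKit terms, the whole setup really is only a few lines. Here's a minimal sketch, assuming you have an AVPlayer with direct access to a video file (which, as we'll see, is exactly what Juno doesn't have; `videoURL` is hypothetical):

```swift
import AVFoundation
import RealityKit

// Hypothetical: assumes direct access to a 360° video file.
let player = AVPlayer(url: videoURL)

// A sphere large enough to fully surround the viewer,
// textured with the playing video.
let sphere = ModelEntity(
    mesh: .generateSphere(radius: 1_000),
    materials: [VideoMaterial(avPlayer: player)]
)

// Invert the sphere so the texture renders on its inside surface,
// then place the viewer at its center.
sphere.scale *= SIMD3<Float>(-1, 1, 1)
sphere.position = .zero

player.play()
```

The scale flip is the one non-obvious step: by default the material is drawn on the outside of the sphere, and mirroring it turns the sphere inside out so the video faces the viewer standing at the center.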

    There’s always a catch

    In RealityKit, you get shapes, and you get materials you can texture those shapes with. But you can’t just use anything as a texture, silly. Applying a texture to a complex 3D shape can be a pretty intensive thing to do, so RealityKit basically only wants images, or videos (a bit of an oversimplification but it holds for this example). You can’t, for instance, show a scrolling recipe list or a dynamic map of your city and stretch that over a cube. “Views” in SwiftUI and UIKit (labels, maps, lists, web views, buttons, etc.) can’t be used as a material (yet?).

    This is a big problem for us. In case you don’t remember: while Juno obviously shows videos, it uses web views to accomplish this, as that’s the only way YouTube allows developers to show YouTube videos (otherwise you could, say, avoid viewing ads, which YouTube doesn’t want), and I don’t want to annoy Google/YouTube.

    Web views, while they show a video, are basically a portal into a video-watching experience. You’re just able to see the video; you don’t have access to the underlying video directly, so you can’t apply it as a texture with RealityKit. So we can’t show it on a sphere, and we can’t do 360° video.

    Unless…

    Let’s get inventive

    Vision from Wandavision TV show saying 'What is video, if not a bunch of images persevering?'
    Do I get points for including Vision in an article about Vision Pro

    Okay, so we only have access to the video through a web view. But we can see the video, so what if we just continue to use the web player, and as it plays for the user, we take snapshots of each video frame and paint those snapshots over the sphere that surrounds the user? Do it very rapidly, say, 24 times per second, and you effectively have 24 fps video. Like a flip book!
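As a sketch, that flip book loop could look something like this (`updateSphereTexture` is a hypothetical helper that repaints the sphere's material with the latest frame; the snapshot call itself is covered below):

```swift
import WebKit

// Aim for one snapshot every 1/24th of a second (~42 ms per frame).
let frameInterval = 1.0 / 24.0

Timer.scheduledTimer(withTimeInterval: frameInterval, repeats: true) { _ in
    webView.takeSnapshot(with: nil) { image, error in
        guard let image = image else { return }
        // Hypothetical helper: apply the captured frame to the sphere.
        updateSphereTexture(with: image)
    }
}
```

Of course, this only works if each snapshot actually finishes inside that ~42 ms window, which is where things fall apart.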

    Well, easier said than done! The first big hurdle is that when you take a snapshot of a web view (WKWebView), everything renders into an image perfectly… except the playing video. I assume this is because the video is hardware decoded in a way that’s separate from how iOS performs the capture, so it’s absent. (It’s not because of DRM or anything like that; it happens even with plain local videos on my website.)

    This is fixable, though: with JavaScript we can draw the HTML video element into a separate canvas, and then snapshot the canvas instead.

    const video = document.querySelector('video');
    const canvas = document.createElement('canvas');
    
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    
    const ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    
    video.style.display = 'none';
    document.body.prepend(canvas);
    

    Okay, now we have the video frame visible. How do we capture it? There are a bunch of different tactics I tried for this, and I couldn’t quite get any of them fast enough to simulate 24 FPS (in order to get 24 captured frames per second, each frame capture must take less than 42 ms). But let’s enumerate them from slowest to fastest at taking a snapshot of a 4K video frame (each time is the average of 10 runs).

    CALayer render(in: CGContext)

    Renders a CALayer into a bitmap context, which we then read back as a UIImage.

    UIGraphicsBeginImageContextWithOptions(webView.bounds.size, true, 0.0)
    
    let context = UIGraphicsGetCurrentContext()!
    
    webView.layer.render(in: context)
    
    let image = UIGraphicsGetImageFromCurrentImageContext()
    
    UIGraphicsEndImageContext()
    

    ⏱️ Time: 270 ms

    Metal texture

    (Code from Chris on StackOverflow)

    extension UIView {
        func takeTextureSnapshot(device: MTLDevice) -> MTLTexture? {
            let width = Int(bounds.width)
            let height = Int(bounds.height)
            
        if let context = CGContext(data: nil, width: width, height: height,
                                   bitsPerComponent: 8, bytesPerRow: 0,
                                   space: CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue),
           let data = context.data {
                layer.render(in: context)
                
                let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: width, height: height, mipmapped: false)
                
                if let texture = device.makeTexture(descriptor: desc) {
                    texture.replace(region: MTLRegionMake2D(0, 0, width, height), mipmapLevel: 0, withBytes: data, bytesPerRow: context.bytesPerRow)
                    return texture
                }
            }
            
            return nil
        }
    }
    
    let texture = self.webView.takeTextureSnapshot(device: MTLCreateSystemDefaultDevice()!)
    

    ⏱️ Time: 250 ms (I really thought this would be faster, and maybe I’m doing something wrong, or perhaps Metal textures are hyper-efficient once created, but take a bit to create in the first place)

    UIView drawHierarchy()

    let rendererFormat = UIGraphicsImageRendererFormat.default()
    rendererFormat.opaque = true
    
    let renderer = UIGraphicsImageRenderer(size: webView.bounds.size, format: rendererFormat)
    
    let image = renderer.image { context in
        webView.drawHierarchy(in: webView.bounds, afterScreenUpdates: false)
    }
    

    ⏱️ Time: 150 ms

    JavaScript transfer

    What if we relied on JavaScript to do all the heavy lifting, had the canvas write its contents into a base64 string, and then, using WebKit messageHandlers, communicated that back to Swift?

    const video = document.querySelector('video');
    const canvas = document.createElement('canvas');
    
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    
    const ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    
    video.style.display = 'none';
    document.body.prepend(canvas);
    
    // 🟢 New code
    const imageData = canvas.toDataURL('image/jpeg'); 
    webkit.messageHandlers.imageHandler.postMessage(imageData);
    

    Then convert that to UIImage.

    func userContentController(_ userContentController: WKUserContentController, didReceive message: WKScriptMessage) {
        if message.name == "imageHandler", let dataURLString = message.body as? String {
            let image = convertToUIImage(dataURLString)
        }
    }
    
    private func convertToUIImage(_ dataURLString: String) -> UIImage? {
        guard let dataURL = URL(string: dataURLString),
              let data = try? Data(contentsOf: dataURL) else { return nil }
        return UIImage(data: data)
    }
    

    ⏱️ Time: 130 ms

    WKWebView takeSnapshot()

    webView.takeSnapshot(with: nil) { image, error in
        self.image = image
    }
    

    ⏱️ Time: 70 ms

    Test results

    As you can see, the best of the best got to about 14 frames per second, which isn’t quite up to video playback level. Close, but not quite. I’m out of ideas.
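For reference, here's a quick sketch of how those per-snapshot times convert into the frame rate each method could sustain (numbers from the tests above):

```swift
import Foundation

// Snapshot times from the tests above, in milliseconds.
let snapshotTimes: [(method: String, ms: Double)] = [
    ("CALayer render(in:)", 270),
    ("Metal texture", 250),
    ("UIView drawHierarchy()", 150),
    ("JavaScript transfer", 130),
    ("WKWebView takeSnapshot()", 70),
]

for (method, ms) in snapshotTimes {
    // 1,000 ms per second divided by the cost of one frame.
    let fps = 1000.0 / ms
    print("\(method): \(String(format: "%.1f", fps)) fps")
}
// Even the fastest, takeSnapshot(), lands around 14.3 fps, short of 24.
```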

    There were some interesting suggestions to use the WebCodecs VideoFrame API, or an OffscreenCanvas, but maybe due to my lack of experience with JavaScript I couldn’t get them meaningfully faster than the above JavaScript code with a normal canvas.

    If you have another idea that you get working, I’d love to hear it.

    Why not just get the direct video file then?

    There are two good answers to this question.

    First, the obvious one: Google/YouTube doesn’t like this. If you get the direct video URL, you can circumvent ads, which they’re not a fan of. I want Juno to happily exist as an amazing visionOS experience for YouTube; Google requests that developers go through the web player, and I think I can build an awesome app with that, so that’s fine by me. 360° video is a small feature and I don’t think it’s worth getting in trouble over.

    Secondly, having access to the direct video still wouldn’t do you any good. Why? Codecs.

    Battle of the codecs

    Quick preamble. For years, pretty much all web video was H264. Easy. It’s a format that compresses video to a smaller file size while still keeping a good amount of detail. The act of uncompressing it is a little intensive (think, unzipping a big zip file), so computers have dedicated chips specifically built to do this mega fast. You can do it without these, purely in software, but it takes longer and consumes more power, so not ideal.

    Time went on, videos got bigger, and the search for something that compresses video even better than H264 started (and without licensing fees). The creatively named H265 (aka HEVC) was born, and Apple uses it a bunch (it still has licensing fees, however). Google went in a different direction and developed VP9 and made it royalty-free, though there were still concerns around patents. These formats can produce video files that are half the size of H264 but with the same visual quality.

    Apple added an efficient H265 hardware decoder to the iPhone 7 back in 2016, but to my knowledge VP9 decoding is done completely in software to this day and just relies on the raw power and efficiency of Apple’s CPUs.

    Google wanted to use their own VP9 format, and for 4K videos and above, only VP9 is available, no H264.

    Okay, and?

    So if we want to play back a 4K YouTube video on our iOS device, we’re looking at a VP9 video, plain and simple. The catch is, you cannot play VP9 videos on iOS unless you’re granted a special entitlement by Apple. The YouTube app has this special entitlement, called com.apple.developer.coremedia.allow-alternate-video-decoder-selection, and so does Safari (and presumably so do other large video companies like Twitch, Netflix, etc.).

    But given that I cannot find any official documentation on that entitlement from Apple, it’s safe to say it’s not an entitlement you or I are going to be able to get, so we cannot play back VP9 video, meaning we cannot play back 4K YouTube videos. Your guess is as good as mine as to why; maybe it’s very complex to implement without a native hardware decoder, so Apple doesn’t like giving it out. So if you want 4K YouTube, you’re looking at either a web view or the YouTube app.

    Apple keynote slide for the M3 chip listing AV1 decode support

    Given that no one could agree on a video format, everyone went back to the drawing board, formed a collective group called the Alliance for Open Media (which includes Google, Apple, Samsung, Netflix, etc.), and authored the AV1 codec, hopefully creating the one video format to rule them all, with no licensing fees and, ideally, no patent issues.

    Google uses this on YouTube, and Apple even added a hardware decoder for AV1 in their latest A17 and M3 chips. This means on my iPhone 15 Pro I can play back an AV1 video in iOS’ AVPlayer like butter.

    Buuuuttttt, the Apple Vision Pro ships with an M2, which has no such hardware decoder.

    Why it’s not a big loss

    So the tl;dr so far is that YouTube uses the VP9 codec for its 4K video, and unless you’re special, you can’t play back VP9 video directly, which we’d need to do to be able to project it onto a sphere. Why not just do 1080p video?

    Because even 4K video looks bad in 360 degrees.

    Wait, what? Yeah, 4K looks incredible on the big TV in front of you, but you have to remember that for 360° video, that same resolution completely surrounds you. At any given point, the area you’re looking at is a small subset of the full resolution! In other words, the Vision Pro’s resolution is 4K per eye, meaning any area you look at can show a 4K image, but when you stretch a 4K video all around you, everywhere you look is no longer 4K. It’s almost like the Vision Pro’s resolution per eye drops enormously. If you’re familiar with the pixels per degree (PPD) measurement for VR headsets, 4K immersive video has quite a bad PPD measurement.

    To test this further, I downloaded a 4K 360° video and projected it onto a sphere. The video is stretched from my feet to over my head. When I look straight ahead, I’d say I’m looking at maybe 25% of the total height of the video. That means in a 4K video, which is 2,160 pixels tall, I can see maybe 25% of those pixels, or 540 pixels, so it looks a bit better than a 480p video but far from even 720p.
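The back-of-the-napkin math from that test, as a quick sketch:

```swift
// Rough estimate of the effective resolution when looking straight ahead
// at a 4K equirectangular 360° video.
let videoHeight = 2160.0       // 4K video height in pixels
let visibleFraction = 0.25     // portion of the video height in view at once
let visiblePixels = videoHeight * visibleFraction
// visiblePixels == 540.0, landing between 480p and 720p
```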

    Quick attempted visualization, showing the area you look at with a 4K TV:

    A TV in a living room in the center of your vision, labeled as 2160 pixels in height.

    Versus the area you look at a 4K 360° video:

    A visualization of being fully immersed in a 4K video, showing the center point that you're actually looking at only being maybe 540 pixels in height.

    So in short, it might be 4K, but it’s stretched over a far more massive area than you’re used to when you think about 4K. Imagine your 4K TV were the size of your wall and you were watching it from a foot away: it’d be immersive, but much less sharp. That means in reality it only looks a bit better than 480p or so.

    So while it’d be cool to have 4K 360° video in Juno, I don’t think it looks good enough that it’s that compelling an experience.

    Enter 8K

    The demo videos on the Apple Vision Pro (and the videos they show at the Apple Store) are recorded in 8K, which gives you twice as many vertical and horizontal pixels to work with, and it levels up the experience a ton. Apple wasn’t flexing here; I’d say 8K is the minimum you want for a compelling, immersive video experience.

    A person in a yellow jacket walking across a rope spread over a valley, with a large camera on a crane recording her.
    Apple's impressive 8K immersive video recording setup

    And YouTube does have 8K, 360° videos! They’re rare since the hardware to record that isn’t cheap, but they are available. And pretty cool!

    But if I were a betting man, I’d bet that’s never coming to the first-generation Vision Pro.

    Why? As mentioned, 8K video on YouTube is only available in VP9 and AV1. The Vision Pro does not have a hardware AV1 decoder, as it has an M2, not an M3, so it would have to do the decoding in software. Testing on my M1 Pro MacBook Pro, which seems to Geekbench similarly to the Vision Pro, trying to play back 8K 360° video in Chrome is quite choppy and absolutely hammers my CPU. Apple’s chips may be powerful enough to grunt through 4K without a hardware decoder, but it doesn’t seem you can brute force 8K.

    Maybe I’m wrong or missing something, or Google works with Apple and re-encodes their 360° videos in a specific, Vision-Pro-only H265 format, but I’m not too hopeful that this generation of the product, without an M3, will have 8K 360° YouTube playback. That admittedly is an area the Quest 3 has the Vision Pro beat, in that its Qualcomm chip has an AV1 decoder.

    Does this mean the Vision Pro is a failure and we’ll never see 8K immersive video? Not at all; you could do it in a different codec, and Apple has already shown it’s possible. I’m just not too hopeful for YouTube videos at this stage.

    In Conclusion

    As a developer, playing back the codec YouTube uses for its 4K video does not appear possible. It also doesn’t seem possible to snapshot frames fast enough to project them in realtime 3D. And even if it were, 4K video unfortunately doesn’t look too great; ideally you want 8K, which seems even less likely.

    But dang, it was a fun weekend learning and trying things. If you manage to figure out something that I was unable to, tell me on Mastodon or Twitter, and I’ll name the 3D Theater in Juno after you. 😛 In the meantime, I’m going to finish up Juno 1.2!

    Additional thanks to: Khaos Tian, Arthur Schiller, Drew Olbrich, Seb Vidal, Sanjeet Suhag, Eric Provencher, and others! ❤️


  • My little Apple Vision Pro stand

    February 19, 2024

    Three-quarter shot of the Vision Pro stand showing the Vision Pro standing vertically on a stand with a Pringle chip style top that it rests on, connected to the base with a walnut dowel, with the base holding the battery with an organizer for the cable

    I want somewhere to put my Vision Pro when not in use. Many people use the original box, and there’s beautiful stands that exist out there, but I was looking for something more compact and vertical so it would take up less room on my desk.

    So I opened Fusion 360 (which I am still very much learning), grabbed my calipers, and set out to design a little stand. There was interest when I showed the first version, so I tidied it up a bit before making it available. Mainly, the rod going up to the pringle part was a bit weak, so I ended up beefing up the diameter to 3/4". This also means you can either 3D print the rod, or pick up a bespoke 3/4" rod, 215 mm in length, in a cool material like walnut, maple, brass, steel, or copper, for pretty cheap from Home Depot or Amazon. Then just use some superglue to bind them.

    Anywho, I quite like the end result: it’s compact, seems pretty secure, holds the battery, and has a spot for the cable and rubber feet. Mine is printed in marble PLA with a $3 walnut dowel. It’s designed around my 23W light seal; I admittedly don’t know how it works with other sizes.

    Close up shot of the bottom of the stand holding a battery and having an area to keep the cables tidy

    You can download it on MakerWorld; I like them because I can send the designs to my Bambu printer really easily, and they give free filament with enough downloads, haha. You can also download just the top pringle if you want a lens shield for your bag or something.

    For a period of time, if you say the promo code “free free free for me me me” to your computer, the download will be available for free.

    I’m unfortunately not assembling/selling this myself (I don’t quite have the expertise/time to manufacture and sell it for folks without a 3D printer), but lots of local libraries have 3D printers nowadays (if you’re looking for a suggestion for your own, I quite like my Bambu P1S), or you can submit it to sites like PCBWay to have them make it for you. And if you the reader would like to sell it on Etsy or something, go for it! (Just maybe link credit back, and if you make a bunch of money donate some to your local animal shelter and I’d be stoked.)


  • Juno 1.1

    February 14, 2024

    If you’re new, Juno is a visionOS app for YouTube!

    Juno’s initial launch blew my socks off. It was such a cool feeling to release an app on day one of the Apple Vision Pro’s launch, and having people be so excited about it and have such great feedback made it that much better. After coding like crazy all week to get it submitted, then driving 10 hours to New Hampshire from Canada, sitting down and reading all the comments in my hotel room made me really happy and even more excited to keep building onto the app for this cool new platform.

    It was a little brutal realizing I had to finish the long drive back before I could get much work done, but I made it home just in time for a delightful snowstorm!

    After that, I got to work, and one week later Juno 1.1 is now available and addresses a bunch of the great feedback given and makes the app that much better. I’ve also got some fun stuff cooking for 1.2 and beyond, being able to actually use it on the device makes building it that much more fun. :)

    Here are the changes in 1.1:

    Quality options

    Video of Casey Neistat video playing in Juno with the new quality options overlaid

    Juno makes a best guess at what resolution to play back at, and while that’s still present, it worked a bit better in the simulator than it did on the real device. A lot of folks requested manual control over playback quality, and Juno 1.1 adds exactly that! Want 4K? Have at it! 240p aficionado? Got you covered too.

    Volume control

    Added quicker access to volume controls, so now you can tweak Juno’s volume right from the video player (you can still also change it by reaching up to the device’s dial and then looking at the volume icon to adjust). Right now this affects the video player’s volume specifically, not the whole system’s, but I’m looking into how to improve that.

    Drag and drop support

    If someone sends you a funny YouTube video, just drag and drop the link onto Juno to open it!

    Captions/subtitles

    Video of Sanago video playing in Juno in Korean with English subtitles showing

    Haven’t quite learned Korean yet but still want to know what the person is saying? Or simply want some auto-generated captions for your own language? Juno has you covered now.

    URL scheme

    If you want to open a video in Juno via Shortcuts or another app, simply change the https:// part of the URL to juno://, so for instance https://www.youtube.com/watch?v=dtp6b76pMak becomes juno://www.youtube.com/watch?v=dtp6b76pMak.
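As a quick sketch in Swift (using the example URL above), the conversion is just a scheme swap:

```swift
// Turn a YouTube link into a Juno deep link by swapping the URL scheme.
let youtubeLink = "https://www.youtube.com/watch?v=dtp6b76pMak"
let junoLink = youtubeLink.replacingOccurrences(of: "https://", with: "juno://")
// junoLink == "juno://www.youtube.com/watch?v=dtp6b76pMak"
```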

    Redesigned end of video screen

    When a video ends you can now quickly close the video or restart the video with a friendly little screen.

    Even faster video load times

    Found an area of my code that I was able to increase the efficiency of substantially and videos should load even faster now! 🏎️

    Video player UI improvements

    • You can now more easily jump between playback speeds
    • Improved the video scrubbing control (the volume control also uses it) with a new custom view called JunoSlider that expands on selection (planning to open source it soon)
    • Corner radius is less dramatic during video playback so as to crop out less of the video
    • When video playback controls fade out, the system ‘grab bar’ now also fades out as it could be distracting to your immersion

    Less accidental input

    This is a funny one. When designing Juno I had a bunch of fun ideas to make video playback nicer, and most of them went over well!

    On the other side, I had the idea to make it so you could “scrub anywhere” on the video screen to go backward and forward in time, a beloved feature in Apollo that worked great in the visionOS simulator with a mouse and keyboard. But on the actual device, when you’re pinching and looking around at all sorts of stuff, it introduced a lot of accidental input and was much more of a pain than an actual feature. So I nixed this one, and instead made the video scrubber control at the bottom even better.

    Another feature similar to that was how on iOS, if the video controls are hidden and you tap the middle of the screen, it assumes you want to pause and does so. I added this to Juno because it felt great in the simulator, but yeah, same situation: a lot of people found it to be an annoying bug, and just looking at the pause button is already pretty fast, so there wasn’t much need for this one.

    A better memory

    Juno now remembers your playback speed and volume settings from the previous video and automatically applies them.

    Bug fixes

    In addition to the above there’s also a bunch of bug fixes for things folks with keen eyes were kind enough to point out!

    • Fixed bug where you could be signed out temporarily if tapping on the Home button
    • Fixed bug where video controls (like the close button) could disappear sometimes, particularly after resizing
    • Fixed bug where video title could sometimes be wrong
    • Fixed bug where you couldn’t tap the video categories at the top of the home page
    • Fixed bug where video could get cropped down on resize
    • Fixed bug where video controls would still automatically fade away when paused
    • Fixed bug where the icon for going forward could be backward
    • Fixed bug where playback bar could be stretched
    • Fixed bug where video controls could persist after closing video
    • A bunch of other smaller fixes and tweaks

    Thank You ❤️

    The reception to Juno really has been phenomenal and kind. Thank you for that; it’s been a ton of fun developing and hearing what you think.

    If you’re enjoying the work, leaving a positive review on the App Store would be awesome, and if you haven’t checked it out yet, I’d love it if you did and let me know what you think.

    I’ve got a bunch of more fun ideas cooking for Juno! Keep the feedback coming!

    juno.vision 🥽


  • Introducing Juno for Apple Vision Pro

    February 1, 2024

    Apple Vision Pro view of a living room with a floating window showing an iJustine video with the Juno app icon floating above it

    YouTube is probably one of the parts of the internet I consume the most, so I was more than a little sad when YouTube announced that they don’t have plans to build a visionOS app, and disabled the option to load the iPad app. This leaves you with Safari, and the website is okay, but definitely doesn’t feel like a visionOS app. Couple that with visionOS not having the option to add websites to your Home Screen, and YouTube isn’t that convenient on visionOS by default.

    Then I remembered that for years my old app, Apollo, played back YouTube videos submitted to Reddit pretty well, and that I’d developed a pretty good understanding of how YouTube worked. That sparked the idea to reuse some of Apollo’s code and build a little YouTube client of my own for visionOS, and after a mad week of coding, “Juno for YouTube” was born.

    How does it work… technically?

    Cleo Abram video where she's hanging out with a Boston Dynamics robot

    YouTube has a few different APIs.

    They have a “Data API” for fetching information (thumbnail, duration, etc.) for a video, which requires an API key and auditing, and you can only call it so many times a day. This API doesn’t actually get you the video to play or anything; it’s purely for metadata, and for uploading.

    They have private/internal APIs that they get grumpy at you for using because you can circumvent ads. The goal with this app was to not make Google grumpy.

    Lastly, they have an embed API that’s pretty powerful, and it’s what I used in Apollo and now Juno. There are no API keys, or limits to how many times a day you can call it, as it literally just loads the video in a web view and provides JavaScript methods to interact with the video, such as pause, play, speed up, etc. It’s really nice: you can play YouTube videos back, and YouTube still gets to show ads (if the user doesn’t have YouTube Premium) and whatnot, so no one is grumpy.

    This means you can build a fully native visionOS UI that then interacts with the underlying YouTube player via JavaScript, so you get the best of both worlds. Juno even detects the aspect ratio of a video and resizes the window automatically, so ultra-wide 21:9 movie trailers are respected, as are nostalgic 4:3 uploads.
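As a rough sketch of what that native-to-web bridge looks like (the function calls follow the pauseVideo()/setPlaybackRate() style of YouTube's IFrame Player API; `player` is assumed to be the name of the page's player object):

```swift
import WebKit

// Native UI controls drive the embedded YouTube player through JavaScript.
// Assumes the page exposes a `player` object per the IFrame Player API.
func pauseVideo(in webView: WKWebView) {
    webView.evaluateJavaScript("player.pauseVideo();")
}

func setPlaybackRate(_ rate: Double, in webView: WKWebView) {
    webView.evaluateJavaScript("player.setPlaybackRate(\(rate));")
}
```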

    The one downside is that occasionally you’ll get a creator who has disabled playback for YouTube embeds. This is rare, especially with videos made in the last few years, but for those Juno will auto-detect it and just load up the normal video website page rather than the fancy player.

    What about the browsing itself?

    Searching 'Apple Vision Pro' in Juno, showing search results for day one reviews with a website feel combined with visionOS aesthetics

    At its core, Juno uses the YouTube website itself. No, not scraped. It presents the website as you would load it, but similar to how browser extensions work, it tweaks the theming of the site through CSS and JavaScript.

    That results in:

    • Tweaking backgrounds so the beautiful glassy look of visionOS shows through. As the great Serenity Caldwell once said, “Opaque windows can feel heavy and constricting, especially at large sizes. Whenever possible, prefer the glass material (which pulls light from people’s surroundings).”
    • Increasing contrast so items are properly visible
    • Making buttons like the button to view your subscriptions native UI, and then loading the relevant portions of the website accordingly
    • You get your full recommendations, subscriptions and whatnot, just as you would on the normal YouTube site or app
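The mechanics here are the same as a browser extension's content script: inject styling when the page loads. A minimal sketch using WKUserScript (the CSS itself is purely illustrative, not Juno's actual styling):

```swift
import WebKit

// Illustrative CSS: let the visionOS glass material show through the page.
let css = "body { background: transparent !important; }"

// Wrap the CSS in a tiny script that appends a <style> tag to the page.
let source = """
    const style = document.createElement('style');
    style.textContent = `\(css)`;
    document.head.appendChild(style);
    """

let script = WKUserScript(source: source,
                          injectionTime: .atDocumentEnd,
                          forMainFrameOnly: true)
webView.configuration.userContentController.addUserScript(script)
```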

    It was a lot of work tweaking the CSS to get the YouTube website to something that felt comfortable and at home on visionOS, but I’m really happy with how it turned out. Does it feel like a perfectly native visionOS app? Well no, but it’s a heck of a lot nicer than the website, and to be fair Google apps normally do their own thing rather than use iOS system UI, so not sure we’ll ever fully see that. :)

    Does it block ads?

    It doesn’t, I don’t think Google would like that, but if you have YouTube Premium you won’t see ads, just like the website. Honestly, YouTube Premium is like one of the most essential subscriptions for me, it’s so handy to never worry about ads and it’s pretty cool in that it also supports the creators substantially more than if you watched ads. So I dunno, if you can afford an expensive Apple Vision Pro, I’d really consider treating yourself to YouTube Premium!

    An MKBHD video that is positioned slightly off center in your peripheral vision

    Features

    • Beautiful translucent visionOS interface
    • Automatic aspect ratio detection
    • Speed up or slow down video
    • Native controls for video playback
    • Pinch-drag anywhere to scrub through video (an Apollo classic)
    • Double-pinch either side of the video to jump forward or back 10 seconds in time
    • Quick launch YouTube from Home Screen
    • Dim your surroundings to focus on the video
    • View your recommendations, subscriptions, playlists, etc.
    • Resizable (while maintaining correct aspect ratio)
    • Automatic quality selection, should scale up or down based on the size of your window all the way to 4K

    Features I’m looking into

    This was a bit of a mad dash to get finished in time for the Apple Vision Pro launch, so I’m hoping to add some more things with time.

    • Ability to see comments (I mean, they’re useful sometimes…)
    • Maybe select quality directly if interest is there
    • Caption controls (couldn’t quite get this working in time for 1.0)
    • More immersive environments
    • Multiview for multiple videos

    If there’s more you’d like to see, let me know!

    Can I give feedback?

    Yes please, I’d love that! I’ve only been able to develop this in the simulator, which obviously has its limitations, so once I get my hands on a device this Friday I’ll probably have a lot of thoughts on things I want to improve as well. That also means there will probably be some bugs here and there too. But I’d love to hear your experience and feedback with the app, so feel free to reach out to me on Mastodon or Twitter!

    Check it out!

    It’s available on the App Store for $5! A fun URL to find it is juno.vision. No subscriptions or in-app purchases, just a one-time, paid-up-front app like it’s 2008. I considered making it free, or like a buck, but it’s a premium platform, and I think paying a few bucks for a good app is something we should encourage if we want more developers building for this platform.

    I think the result is a really comfy way to browse YouTube on visionOS, and having a way to quickly launch YouTube right from your Home Screen is super convenient.

    I’m looking forward to doing more with it, and cheers to Matthew Skiles for designing the icon! He actually made some beautiful alternate icons as well, but those apparently aren’t supported in visionOS 1.0.

    Download it today!


  • Autonomous Standing Desk and Chair Review

    November 14, 2023

    Black cat sitting on red chair facing camera in front of white standing desk

    Autonomous was nice enough to send me one of both their SmartDesk Pro standing desks and ErgoChair Pro chairs in exchange for posting about them on Twitter, and I wanted to cover them in more detail on my blog as well so I could give my full thoughts for anyone in the market for a standing desk and chair. I care a lot about a quality, ergonomic desk setup and have tried a lot of different products, so I like to think I have a decent perspective here.

    The tl;dr is that they’re both really nice, though they have a few small things I would personally tweak.

    Small note: I’m Canadian but I’m putting all the prices in US dollars since most of my readers are American.

    The Desk

    Black cat walking across white standing desk top in high desk position

    I’m a programmer and care a lot about the place that I sit and stand at for an inordinate amount of time, so over the years I’ve tried a lot of different products to make my setup more comfortable, inviting, and just more pleasurable to use. That includes a few standing desks.

    My first standing desk was the cheapest one I could find on Amazon that had memory presets (trust me, you really want to spend the extra twenty bucks or whatever to not have to hold a button down for a while and hope you hit the approximate height you like).

    I would not recommend this approach. It had a single motor in one of the legs that twisted a rod to move the other leg up. This makes for a cheaper desk, but a much less reliable one, and one day I pressed the button to find my desk surface at a 45° angle after the motor had an issue getting the other side to cooperate. The desk had a comically short warranty, so I was out of luck: instead of originally spending a bit more for a quality desk, I was now doing that anyway, but out the cost of an additional desk.

    I ended up buying a Jarvis standing desk by Fully after a friend recommended it, and it’s been great. Both that desk and the Autonomous one are very similar, with dual motors, one per leg. At the time of writing, both have Black Friday sales, but the Autonomous desk comes in a fair bit cheaper: $489.00 (with my code “22BFSELIG”) versus $599 for the Jarvis.

    I can’t speak to the Jarvis desktops (I bought mine as just the frame, as at the time shipping to Canada was very expensive), but I like the white laminate one from Autonomous a lot. I originally thought it would be one of those IKEA-style ones that are filled with cardboard, but this sucker is heavy and has nice grommets to route your wires through. Funnily enough, though, I visually prefer the square corners of the IKEA cardboard edition over the rounded Autonomous ones, so I ended up using it with the IKEA top.

    Autonomous also offers an even cheaper “SmartDesk Core”, but it doesn’t offer the same height range as the Pro, so that model was a non-starter for me. In order to have an ergonomic wrist angle with my keyboard I like to put the desk quite low, so make sure you have an idea of what your ideal desk height is and that the desk you buy supports it. I was kinda surprised to see that the Autonomous desk, despite being listed as having a higher minimum height than the Jarvis, actually goes lower. At the lowest setting, the Autonomous desk is 25" off the ground, while the Jarvis sits a quarter inch higher at 25.25".

    Measuring tape showing the Autonomous desk at 25 inches tall at its lowest position

    The Autonomous desk is also noticeably quieter than the Jarvis in operation. Not to say the Jarvis is loud per se, but the Autonomous is a fair bit less noticeable going up and down. My initial reaction was that maybe the Autonomous has weaker motors, but if I sit on either desk, both are still easily able to go up and down, and I weigh 180 lbs; I’m not sure anyone’s everyday desk has more weight than that on it. They also go up and down at pretty much the exact same speed as far as I can tell.

    One area where I will give it to the Jarvis, though, is that I slightly prefer its control system. I imagine it’s to prevent accidental input, but the Autonomous requires you to hold down a button for a beat before it engages and starts moving the desk, while the Jarvis responds as soon as you touch it. This is probably a matter of preference, but it also manifests when you’re manually moving the desk with the up and down arrows: when you reach your setting and release your finger, the Autonomous takes a second before it stops, so unless you account for that you’ll typically overshoot your target slightly. I’d love to see a DIP switch or something on the controller that would let you control this behavior.

    Overall, especially with the price advantage, I’d go with the Autonomous desk personally. Pleased as punch, and now I have an extra desk set up in the corner of my office in case I… want to change it up I suppose? Now I finally have a use for that LG Ultrafine 5K that’s been sitting in my closet for two years!

    Oh, and get a standing desk mat. This is not negotiable, there’s a reason grocery store employees all use them. Standing for hours on a hard surface is not great for you, and will catch up to you eventually. It will make the standing desk a million times more inviting and comfortable to use, and they’re pretty inexpensive.

    The Chair

    Black cat sitting on red office chair in front of desk staring deeply into camera

    Standing desks always seem like a slam dunk, “duh” upgrade to people, but chairs seem unfortunately underappreciated. But they shouldn’t be! All the time you’re not standing, you’re sitting, and doing so in a good quality chair will pay dividends in the health of your body over the years.

    So naturally, I’ve tried quite a few desk chairs over the years. About seven years ago I went on a quest to get a good quality chair, and tried out all the “greats” of that era. I rented a Herman Miller Embody, a Herman Miller Aeron, a Steelcase Gesture, and a Steelcase Leap. While they were all quality chairs, the Steelcase Leap ended up being my favorite by a fair bit.

    Chairs are super personal, and the Leap just seemed to meld with my body the best, so I encourage you to try out chairs in person if possible, or order from a company that allows returns if you’re not satisfied. Seriously, from what people said the Herman Miller Embody sounded like it descended from the heavens, but I just wasn’t a big fan of how it fit against my back despite attempted adjustments. Autonomous seems to fit the bill for letting you send a chair back if you’re not a fan, though if you’re buying during a sale like Black Friday I would contact them to see if that still applies, as they seem to have an asterisk for sale items.

    So basically, take the time to learn the adjustments on the chair, watch a YouTube video or two on proper ergonomics, and adjust the chair to fit you. Given the sheer number of possible combinations, the configuration a chair comes in out of the box is likely not the one that best suits you!

    Long story short, the Autonomous ErgoChair Pro (they also have an ErgoChair Plus that seems closer to the Herman Miller Embody style) is a really comfy chair, and I love the red color I ordered it in. Also, between my Leap, my girlfriend’s office chair, and the Autonomous chair, my cat Ruby always chooses the Autonomous to sleep on, so that must mean something (I think it’s the fact it has the widest butt cushion area, which makes it feel a bit like you’re sitting on a throne).

    The price difference is pretty substantial too, at the time of writing the ErgoChair Pro is well under half the price of the Steelcase Leap (again, use that 22BFSELIG code to grab an additional 10% off for Black Friday).

    Black cat lying on red office chair staring at camera from above with a relaxed expression
    She won’t let me sit :(

    I’ve been using the Autonomous chair for about a week now at my desk, and while I think it’s a quality chair, I think I still have a slight preference for my Leap. That shouldn’t be super surprising at over double the price, but there are a few things that push it slightly in the Leap’s favor for me.

    For one, the Autonomous chair is very adjustable, but I can’t quite get the lumbar support to a place I like versus the Leap. It’s pushed in slightly more dramatically on the Autonomous than on the Steelcase, and while that’s adjustable, I can’t dial it back far enough (the Leap’s lumbar is a lot more adjustable).

    I also like how on the Leap, when you recline, the seat cushion automatically slides forward a bit with you, so you don’t slump off the chair as much. I also like that you can choose how far it can recline, allowing for a bit of a recline rather than going all the way back. Lastly, the arms on the Leap can go lower than on the Autonomous, which is just enough that I can push the Leap under my desk but not the Autonomous. You could always just not install the arms on the Autonomous, though, if you don’t use them.

    All that said, my girlfriend has your generic $70 Staples chair, and I gave her the Autonomous to try for a bit, and she really, really liked it. Again, it’s more expensive, so not super surprising, but a quality chair is important. She also really liked that the Autonomous chair has a headrest, unlike my Steelcase (you can buy one for the Steelcase, but it costs extra and isn’t nearly as customizable as the Autonomous one), and she’s now happily using it. See what I mean about personal preference?

    I really wish my Leap had the Autonomous’ adjustment for the angle of the seat, though. And I have to admit the Autonomous looks a fair bit cooler than the Steelcase Leap, which kinda just looks like your run-of-the-mill corporate America office chair.

    Overall

    I don’t get sent a lot of free stuff despite loving free stuff, so I was somewhat scared these were going to arrive and I might have to contact Autonomous and be like “ehhhhh, thanks for sending but not a fan”, but I’m delighted that they’re both very nice, price-competitive options in the ergonomic desk setup space that I have no issue recommending. And if you do end up going with Autonomous, Black Friday is a great opportunity, and be sure to take 10% off on top of the existing sales with my coupon: 22BFSELIG. (I don’t get any kickback, but you might as well save some extra money! :p)

    Seriously, be it these options or something entirely different, do yourself a favor (if you have the means) and treat yourself to a quality desk and chair (and ideally keyboard, mouse, and monitor height). They can make a big difference in your health, especially compounded over years and years and years of heavy use if you’re in a desk-heavy job like programming, design, customer support, etc.

    There’s always that adage about treating yourself to quality things if they separate you from the ground, which is normally said in the context of shoes and a mattress, but given that many of us spend as much time at our desk as we do sleeping, I think a quality desk setup easily qualifies as well.