• Rivian R2 wishes as an R1 owner

    February 9, 2026

    Green Rivian R1S in the snow

    After 7 years with a Tesla Model 3, we picked up a gen 2 Rivian R1S in April of 2025. We still have the Model 3 as a second vehicle, but it’s been really cool experiencing a new electric vehicle from a very passionate new company.

    2026 is a really exciting time for Rivian, as in the first half of this year they’re launching their R2 vehicle - a smaller, less expensive SUV offering that should have a lot more mass-market appeal.

    With a bunch of journalists getting previews of the vehicle today, I thought I’d share what I’m really hoping for in this new vehicle, having experienced their existing vehicle for almost a year, and a Tesla for the better part of a decade. None of these are in any particular order.

    Better audio

    We sprung for the “Premium Audio” package in our R1S and… it’s not premium at all. Whatever system just came with our Model 3 sounds demonstrably better, from where it feels like the sound is coming from (much more expansive) to the bass; the R1S just feels a lot weaker. Honestly, for a base audio system it would probably be decent, but for a “Premium Audio” system it just falls short. They have been making it better and better with software updates of all things over the course of our vehicle’s life, and it’s noticeably better now than it was at the beginning (a recent update in December basically sounds like they figured out how to turn on the subwoofer in the trunk a little bit), but it’s still just not that much.

    With the Tesla, sometimes I’d park and finish out a song before leaving the car because it just sounded so good. That has never happened in the Rivian, so I hope they bring some of that experience to the R2, even if it’s through another “Premium Audio” package for those who care.

    No dual motor EPA shenanigans

    So our R1S is dual motor, meaning it has a motor for the front wheels and a separate motor for the back wheels, so you get AWD. Rivian allows you to get more range by only using the front motor, turning it into effectively a front-wheel-drive vehicle, and as only one motor is active you get a bit better range (about 10%). Sounds great, right? Choose the more efficient front-wheel drive on trips, but just use AWD around town by default.

    Well, the devil is in the details. In order to be able to market this slightly improved range, the EPA requires Rivian to automatically revert back to this front wheel drive/higher efficiency mode after a few hours. Kinda like gas vehicles and how they turn off the engine at traffic lights, and if you disable that it just keeps turning itself back on. So even if it’s winter and you’re like, “Dang, roads are a little dicey, I want to be in AWD”, if you park the car for a while and forget to set it back to AWD, it just reverts to front wheel drive. It’s like if your iPhone kept reverting to “Low Battery Mode” every 2 hours even after you keep toggling it off, so Apple could advertise that model having 10% better battery life.

    Note: the vehicle isn’t hard-locked to front wheel drive in this mode. If it slips, it alerts you that it’s switching over to AWD (I don’t want to wait for it to slip), and if you floor it on the highway, for instance, it’ll engage the rear motor for extra grunt. But because it has to link up the rear motor to an already-moving system, you can sometimes get a kinda weird clunk feeling as the rear motor connects itself at speed, which isn’t very satisfying. It almost feels like a wheel slipping in the rain.

    This is maddening, and is only the case on their dual motor vehicles. For tri and quad motor vehicles, they just don’t market them as having that extra 10% range, so all the modes are actually sticky! If you say “front wheel drive mode” (they call it “Conserve”) it stays there indefinitely, if you say “AWD mode”, it stays there indefinitely.

    Is this dumb? Yes. Do I blame the EPA? Yes. Do I also blame Rivian? Also yes, they’re making this trade off to be able to market extra range.

    If Rivian does this same stuff for the R2 dual motor just so they can advertise a few extra miles, I really hope they have a $10 option you can configure when you order called “Give me less marketed range with no actual range decrease but have the vehicle actually do what I tell it to”, but maybe with a catchier title.

    Spare compact tire

    Our Tesla lacks a spare tire of any sort, instead electing to include a repair method and roadside assistance. A few years back a nasty pothole absolutely destroyed the sidewall of one of our tires on a drive home, and since it was sidewall damage it simply wasn’t repairable, so we had to call Tesla roadside assistance. Unfortunately, Tesla roadside assistance was absolutely useless, taking ages to respond and then ultimately not having any providers in the area, so we just ditched the car, got a ride home with a friend, and dealt with it the next day.

    After that I was like “I do not want another vehicle without a spare tire on board”.

    With our R1S on the other hand, we had another unlucky event where we popped a tire (also the sidewall, if you’d believe it; I have some great luck) and sure enough, since we elected to get a compact spare tire it was super easy to deal with: just grab the included jack, throw the compact spare on, and the next day slowly drive over to a shop for a new tire. Completely uneventful.

    I was worried that with the smaller body you wouldn’t be able to fit one in the R2, but Jerry Rig Everything showed room for a compact spare in the sub trunk. Nice!

    Empty space for a compact spare tire in the trunk of the Rivian R2

    Digital rear view mirror

    I don’t see this in any of the videos so it seems unlikely but I’m holding out hope it’s an option.

    Picture this: you approach your vehicle, and since it sees it’s your phone it knows who you are, so it sets your seat, steering wheel, and mirror positions, your Apple Music/Spotify account, and your temperature preferences. You didn’t have to do a thing, the vehicle is just smart! Except… your spouse is 7 inches shorter than you, so when you look in the rear view mirror you’re staring at the back seat.

    Is having to adjust your rear view mirror a big deal? No, but having the vehicle do everything else for you almost draws more attention to the final thing it’s missing that you still have to adjust every single time. This is a solved problem in inexpensive vehicles, just have a “digital rear view mirror” that requires no adjusting as it just shows a camera feed of the rear of the vehicle where the mirror is.

    Digital rear view mirror in a Toyota RAV4
    Digital rear view mirror in a RAV4 by u/MildSpaghettiSauce

    Rivian’s cameras are legitimately so good that at night it’s easier to use them for side blind spot monitoring when you change lanes than the actual side mirrors, because they let in enough light that you can actually make out details through the pseudo “Night Mode” camera vision that you can’t with the mirrors alone. It’s wild. I want that for the rear view mirror too!

    Do you still prefer an analog mirror? That’s totally cool, all the vehicles that offer this let you just toggle back to a good ol’ reflective mirror. No harm no foul.

    V2H story

    Not a lot of people realize one of the most powerful parts of EVs: they’re mobile power stations that can (theoretically) power your entire home. Take a Tesla Model 3 for instance, it has a 75 kWh battery. Tesla also sells Powerwalls to help you back up your home; each Powerwall is around $6,000 and has 13.5 kWh of battery capacity. Yes, that means your Model 3 is the equivalent of more than 5 Powerwalls, or $33,000 in equivalent batteries.

    That’s nuts! Ever have a power outage? With the average home in Canada using about 30 kWh per day, that could power your house through potentially multiple days and genuinely save lives.

    My Rivian R1S is a lot better than my Tesla here in that it actually has normal, 120V AC outlets, but they can only output a measly 1.5 kW, so even powering a hungry kettle could result in the breaker tripping. Much better than the max 120W my Tesla can do through a 12V cigarette outlet (good god how is that the best they can do), but it’s still not enough output to power a house effectively.

    It’s kinda like being in a drought with a massive water tower, but water only comes out in drips. Better than nothing, but we need output speed too.

    The R1 can output much more by pulling DC energy directly from the battery through CCS protocols like the ISO 15118 standard, and sure enough, despite Rivian not talking about it, folks with these systems have been able to connect the R1S directly to their house with the appropriate cable and send up to 24 kW to power their entire home. Crazy stuff; I hope Rivian talks about this more in the future, as the vehicle clearly supports it even if they are quiet about it.

    I’m kinda curious what the R2’s story is here. The CEO of Rivian said in an interview (~1:20:20) that the R1 and R2 both have bidirectional EV charging in the realm of 20 kW (which again we’re finally seeing folks be able to take advantage of in the R1 recently), but unlike being limited to 1.5 kW of AC output in the R1, the CEO says the R2 will be able to do “10 or 11 kW”.

    That’s massive versus the 1.5 kW the R1 can do, but I’m not sure that made it to production in the end? Doug Demuro showed a Rivian graphic with a V2L adapter (at 26:22) but it’s only listed as 2.4 kW. Still a lot better than 1.5, but a far cry from the 10 or 11 kW that RJ Scaringe said earlier.

    V2L poster at Rivian for the R2 for an adapter claiming 2,400W

    Either way, talk more about this Rivian! This is one of the coolest parts about EVs!

    Better suspension

    The R1S has a fancy air suspension, so picture a bagpipe over each corner of the vehicle that lets it inflate or deflate to change the height of the vehicle and theoretically make for a cushier ride.

    I say theoretically because honestly I find the gen 2 R1S kinda rough suspension-wise. Hitting the same potholes around where I live, the Tesla Model 3 (with a much simpler coil suspension, no bagpipes) honestly makes me wince less than the Rivian R1S. I thought it might have been the massive 22" wheels that came on the Rivian and the correspondingly small sidewall on the tire, but we switched to a 20" wheel for the winter with a much larger sidewall and it’s better but still not great.

    Would love to see Rivian tune this so that the R2 is a super smooth ride, I’ve heard the newer Tesla Model Ys are incredible here and also just have a coil suspension like the R2 will have.

    Faster charging

    Brands love to brag about maximum charge speeds, our Tesla for instance got a software update to enable 250 kW charging, which is super fast. To put that in perspective, most new homes in North America have 200 amp service going into their home, 250 kW is the equivalent of upgrading to 1,000 amp service, and then pouring every drop of that power into your car.

    But, while that top speed is impressive, it holds that for like, minutes maybe before crashing down to much slower charging speeds. Our R1S is no different here: good peak speeds, but it doesn’t exactly hold them super well. There have been tons of reports that the cooling method for the batteries in the R1S is just underwhelming: the Rivian R1 models have two battery packs stacked on top of each other with a single liquid cooling plate in the middle (so only the top or the bottom of each pack is touching the cooling surface), and that single plate often seems a little underpowered for cooling such a massive pack quickly. Kinda like thermal throttling in laptops! We typically see around 45 minutes for the battery to charge from 10-80% at fast chargers.

    The R2 on the other hand appears to be moving to a smarter cooling method where the cooling liquid flows almost through a ribbon, weaving along the sides of each cell in the battery pack, meaning a lot more cooling surface area.

    And indeed, this seems to have paid off, with a Rivian employee saying they’re now under 30 minutes for 10-80% on the R2. Class-leading? No, Hyundai is under 20 minutes, but a fair bit better.

    And for folks without EV experience, honestly, this is mostly a non-issue. 99.5% of your charging is done at home where speeds don’t matter (it just charges overnight like your phone), it’s only if you’re going on a bit of a roadtrip where this comes into play.

    Better USB-C

    Okay this isn’t a big deal, but the R1S has a ton of USB-C ports everywhere, like there must be close to a dozen, which is awesome. But if you try to charge a power bank or a laptop or something over USB-C it’s just… not very fast. I haven’t actually measured, but I’m assuming they only do 12V (at most) instead of the 20V most chargers offer nowadays, which I really hope they improve. My MacBook charges so slowly.

    Better phone charger

    This is the one part of the Rivian that I’m like “how did this even make it out of the factory”. And if you’d believe it the phone charger in my vehicle is the second revision, so this is their attempt at fixing it somehow.

    Basically they have a flat little area near the arm rests where you can place one or two phones to have them wirelessly charge. Sounds fine, right?

    I haven’t measured it, but from experience I believe the charging “sweet spot” is approximately 4 atoms wide. If the car is in motion at all, the phone moves off those four atoms, and the pad tries super hard to charge it but ultimately cannot, just making the phone get super hot and lose a bunch of battery.

    One time we went camping and I was like “Okay, the vehicle is not moving, surely I can just set it here while I sleep and it’ll charge”. Nope, somehow even stationary the phone did not charge beyond 20% overnight and was super hot in the morning. What.

    They have managed to make a phone charger that is worse than not having one at all. I can’t even place my phone in the arm rest while I drive because it just cooks it, at least if nothing was there it could just be a storage location.

    No, mine is not broken, this is a common complaint from just about every Rivian owner, and they need to make this better for the R2. Just hot glue a MagSafe puck in there at minimum.

    Better Phone as a Key (PAAK)

    Rivian and Tesla do the (honestly very cool) thing where the car uses your phone to detect your proximity, lock/unlock itself, and recognize who the driver is to set preferences. It’s great, no having to carry around bulky car keys (don’t worry, there is a backup little credit card style key you carry in your pocket in case your phone dies or disappears).

    Rivian even recently updated theirs to use the first-party “Apple CarKey” functionality so you get bonuses like being able to unlock it for a few hours even after your phone’s battery died, and it uses “Ultra Wide Band” (UWB) so it can position your phone in relation to the car down to the centimeter. Tesla doesn’t do this and has their own proprietary thing that I think is based on Bluetooth but might use a bit of UWB on newer models (not on my car).

    But… Tesla’s is still much better. There’s two aspects of nailing good “phone as a key” support: firstly, unlocking the car as you approach (duh), and secondly, knowing who approached the car so you can set their preferences (seat, steering wheel, mirror positions, temperature preferences, music streaming account, etc.)

    • Unlocking: B+ for Rivian here. Sometimes, even with the recent update, I have to stand by the door for a few seconds and be like “Um, hello” before it sees me there and unlocks. Same phone, walk over to my Tesla, and it always unlocks instantly, even though the Rivian should have a massive advantage with UWB.
    • Identifying driver: D for Rivian here. Again, with UWB and centimeter-level positioning of the driver in relation to the vehicle, Rivian should be able to know exactly who is approaching the driver door when my girlfriend and I (who both have Rivian keys) approach the vehicle, but 95% of the time when my girlfriend is with me (despite her always preferring to be the passenger) it sets her as the driver. Even weirder, sometimes if I’m putting something in the frunk she’ll yell out the window, “Oh it actually recognized you this time!” with me set as the profile, but then when I sit down in the seat it reverts to her. What.

    Rivian needs to do better here for the R2. The pain is especially compounded by the fact that if you leave in a bit of a rush and somehow don’t realize the driver profile is wrong, Rivian won’t let you change your driver settings unless you slow down to under 3 mph, so you better get ready to pull over and stop the car if you need to adjust something. With Tesla you can always just swap profiles and have your steering wheel, seat, mirrors, etc. move to where they should be, and that feels a lot safer than having to muck around with changing things via a touch screen or sit there tweaking controls on the side of the car seat.

    Smoother software

    Somehow despite my Model 3 being a 2018 model and probably running a Raspberry Pi compared to the hardware Rivian runs, moving around the OS in the Tesla still feels faster. With the Rivian there’s still lag just bouncing around screens with stuff sometimes taking a second or two to show up. I don’t get it.

    I really hope this improves on the R2, and thankfully it looks like it does: Doug DeMuro bounces around the R2’s UI here and it’s much faster than my gen 2 R1S, with everything loading virtually instantly. Yay.

    CarPlay

    I’ve never had a vehicle with CarPlay. We rented a vehicle with it and it was kinda neat, but Teslas and Rivians already have like every piece of software I’d want when interfacing with a vehicle (good, traffic-based maps and popular music streaming services), so with the exception of Overcast I don’t personally care about CarPlay at all.

    That being said, I know a ton of people do so I kinda hope Rivian looks into it even just as a little windowed experience because I think it would make a lot more people interested.

    Alexa is so bad

    About a month ago at their Autonomy Day Rivian previewed (among many other cool things) their new “Rivian Assistant”.

    This is sorely needed, their current system uses Alexa and. it. is. so. bad.

    With our Tesla, you can say “Navigate to Blah” and it will just automatically plot it and you’re off to the races.

    Best case with Rivian, you’re like, “Alexa, navigate to Bob’s Cool Donuts in Dartmouth”, and Alexa is like, “Would you like to navigate to Bob’s Cool Donuts, Dartmouth, Nova Scotia?”, “Yes”. It repeats literally the only possible match back to you, requiring confirmation every time, instead of just… taking you there.

    A more typical case is “Alexa, navigate to Bob’s Donuts in Dartmouth”, “Would you like directions to Bob’s Donuts in Toledo, Ohio”. No Alexa, I want the one that’s a five minute drive, not the one a three day drive away. “Oh, okay, try being more specific next time”

    Better heat pump

    They do a decent job isolating the sound from the outside if you’re in the vehicle, but if it’s cold outside and you heat up your Rivian R1S, you can hear that sucker from a full block away making this super high pitched whine. I’ve understandably had multiple people ask “Is it okay?”.

    Better time placement

    The UX for knowing the current time in the cabin is not great. There is no clock on the main driver instrument display, and it’s in the furthest place possible on the middle display, so it’s not exactly easy to check. There’s tons of space on the driver display, so I either wish they shoved it there, or, if you put in directions, I wish it showed you the current time there. Right now it just says “ETA 2:58 PM”, which if you’ve been driving for 45 minutes and have kinda lost track of time is not particularly helpful; that could be in 5 minutes or in 25 minutes for all I know.

    It looks a bit better on the R2 (seen in MKBHD’s video) where they’re placing it on the middle display a lot closer to the driver, but I still wish they just put it on the driver display.

    R2 UI showing the time on the left side of the main display

    Better second row release

    The R1S and many other modern vehicles have really stupid and downright dangerous second row releases. The main door handles are electric, so if there’s an electrical issue and you need to get out of the vehicle, you obviously need a manual release. On my Tesla, there’s a (shocker) pull handle that is obvious and you can just pull to get out of the vehicle. Easy peasy (don’t worry, Tesla has since changed to an inexplicably stupider, hidden design like Rivian’s).

    The R1’s is incredibly stupid: you literally have to pop off trim in the rear door to get access to a cable you can yank. Yeah, good luck remembering that in dire circumstances, so I threw one of those window breakers in the back cubby.

    The R2 is better here: Jerry Rig Everything shows a little button patch you can pop off much more easily to get to the cable. But still, just put in the manual release that the front doors have; this is a basic safety feature and should not be complicated.

    Zack from Jerry Rig Everything's hand on the rear door release patch for the manual door release cable

    Smaller

    This we know will be the case, with the Rivian R2 being about 15 inches shorter than the R1S.

    Size comparison showing the R2 length at 4,715 mm versus 5,100 mm for the R1S

    The R1S is a properly large vehicle, which makes it very capable, but I dunno, I do find myself wishing it was a little smaller quite often, so I honestly think if the R2 is compelling enough I might be trading in my R1.

    … Does this sound negative?

    Reading it back a bit, this post sounds a wee bit negative even though the intention is just to talk about things I hope they improve.

    So just to be totally clear, I love the thing. It’s spacious, incredibly capable, looks great, has amazing range, and is just a lot of fun to drive with a ton of creature comforts that I miss every time I drive the Tesla. But there’s always room for improvement!

    I’m so excited

    I genuinely think Rivian is doing such cool things, and the company behind it seems to have a real passion for building cool products instead of just sitting on Twitter all day, so I’m super excited to see what this mass-market vehicle does for them and I’m hoping for the best.

    Looks like they’re launching in spring (I guessed summer!) with more pricing and configuration details March 12th.


  • CKSyncEngine questions and answers

    January 7, 2026

    I didn't know what to put as a header so here are some iClouds (interesting clouds) in Maine

    I’ve had a lot of fun working with CKSyncEngine over the last month or so. I truly think it’s one of the best APIs Apple has built, and they’ve managed to take a very complex topic (cloud syncing) and make it very digestible and easy to integrate, without having to get into the weeds of CKOperation and whatnot like you had to in previous years.

    That being said, there’s a fair bit of work you still have to do (through no fault of Apple, it’s just that a lot of cloud sync work is application-specific), such as deciding how to handle conflicts, how to integrate the CKRecords into your flow, how to respond to errors, etc.

    More interesting for a blog post, perhaps, I also had a fair few questions going into it (having very little CloudKit knowledge prior to this), and I thought I’d document those questions and the corresponding answers, as well as general insights I found to potentially save a future CKSyncEngine user some time, as I really couldn’t find easy answers to these anywhere (nor did modern LLMs have any idea).

    Apple sample project

    When in doubt, it’s always nice to see how Apple does things in their nicely published CKSyncEngine sample project: https://github.com/apple/sample-cloudkit-sync-engine

    Other awesome resources are Jordan Morgan’s blog post at Superwall, as well as the awesome work by Pointfree on their SQLiteData library which is open source and integrates CKSyncEngine as the syncing layer.

    These are great resources for understanding how to implement CKSyncEngine, which this article won’t be going over; instead, I want to go over questions and edge cases you may encounter.

    Conflict resolution

    If you’ve used NSUbiquitousKeyValueStore (my only prior exposure to iCloud), CKSyncEngine is thankfully a lot smarter with conflict resolution (and by “conflict resolution” I mean “what happens when two devices try to save the same piece of data to the cloud”).

    With NSUbiquitousKeyValueStore if you had super valuable, years old data stored at key “blah” and you downloaded the app onto a new device and somehow set new data to the key “blah” (for instance, existing data hadn’t been downloaded yet) you would completely blow away the existing “blah” data, potentially jeopardizing years of data. Not great, which made me wary of storing much of value there without a ton of checks.

    CKSyncEngine is a lot smarter, where you’re dealing with CKRecords directly (more on that below) and thus can save metadata from them, so if you try to overwrite “blah” and your metadata is not up to date, CKSyncEngine will return a failure with the newest version of that data asking you what you want to do (overwrite your local data with the newer cloud version? tag your version with the newer cloud metadata and re-upload it so it works?), rather than blindly overwriting it. This makes it virtually impossible for a new device to come onto the scene and write “bad data” up, messing up existing data.

    (And serverRecordChanged is the error in failedRecordSaves you hook into!)

    It does beg the question though: “What do you do when there’s a conflict?” That’s what I alluded to earlier with Apple not being able to do everything for you; you need to make some decisions here. For me, it depends on the data. For the vast majority of the data, always having the “server version win” is perfectly fine for my use case, so I overwrite the local version with the cloud version.

    But there are some situations where I want to be a little choosier. For instance, for an integer that can never decrease in value (a good example would be how many times you’ve died in a video game), I have a system that just picks the higher of the cloud value and the local value.
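
    Here’s a rough sketch of what that can look like in the CKSyncEngineDelegate event handler. The helpers (applyServerVersion, saveLocally) and the deathCount field are hypothetical stand-ins for your own data layer; the CloudKit pieces (failedRecordSaves, serverRecordChanged, the error’s serverRecord) are the actual hooks.

    func handleEvent(_ event: CKSyncEngine.Event, syncEngine: CKSyncEngine) async {
        switch event {
        case .sentRecordZoneChanges(let changes):
            for failedSave in changes.failedRecordSaves {
                guard failedSave.error.code == .serverRecordChanged,
                      let serverRecord = failedSave.error.serverRecord else { continue }
    
                // Default policy: server wins, adopt the cloud copy locally
                applyServerVersion(serverRecord)
    
                // Monotonic fields: keep whichever value is higher
                let localDeaths = failedSave.record["deathCount"] as? Int ?? 0
                let serverDeaths = serverRecord["deathCount"] as? Int ?? 0
                if localDeaths > serverDeaths {
                    // Copy our value onto the *server* record (so its change tag is
                    // current), persist it, and queue it to be sent again
                    serverRecord["deathCount"] = localDeaths
                    saveLocally(serverRecord)
                    syncEngine.state.add(pendingRecordZoneChanges: [.saveRecord(serverRecord.recordID)])
                }
            }
        default:
            break
        }
    }
    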

    You could write a long blog post just on this though, the important part is to choose the right system for your application. An app that creates a lot of singular data but rarely ever modifies it will need a dramatically different system than one that has a large, single body of data that is frequently being edited on multiple devices concurrently.

    And remember that CKSyncEngine being effectively a database means you can store a lot more information than the paltry 1,024 keys/1MB total limit that NSUbiquitousKeyValueStore allows, so you can create a much more robust system that’s appropriate to your app, but not necessarily any more complicated!

    Deletion conflict resolution

    Note that deletions just fire without any conflict resolution at the CKSyncEngine level; if you say to delete something with recordID "blah", CKSyncEngine will trust you know what you’re doing and just delete it (and not compare metadata or anything as it doesn’t even ask for it).

    CKRecord handling

    One of the only awkward parts of CKSyncEngine is that it operates through CKRecords, which are quite an old construct (much more Objective-C than Swift), and you have to decide how to incorporate them into your existing data store. They’re basically a big old string-keyed dictionary of data with some metadata.

    For me, I mostly use GRDB (SQLite), and a nice, easy, hybrid solution is to keep your local records with an extra column called something like cloudKitInfo, which is just the CKRecord distilled down into its pure informational metadata. This strips out all of the CKRecord’s large image and text data, and you’re basically just keeping the bare essentials: the metadata fields like its record change tag, for conflict resolution when you upload it.

    If you don’t save these metadata fields you’re going to have a Very Bad Time™ when you go to upload, as your items being uploaded will have no matching metadata, so CloudKit will think you don’t have the most up to date version of that record and give you a conflict error every time.

    So my process generally looks like:

    When you get a new CKRecord from iCloud to sync with your local store, you extract all the data you care about from the dictionary fields (e.g.: item.postTitle = ckRecord["postTitle"]) into your local Swift object, and then extract the CloudKit specific metadata.

    extension CKRecord {
        func systemFieldsData() -> Data {
            let archiver = NSKeyedArchiver(requiringSecureCoding: true)
            encodeSystemFields(with: archiver)
            archiver.finishEncoding()
            return archiver.encodedData
        }
    }
    
    item.cloudKitInfo = ckRecord.systemFieldsData()
    saveToSQLite(item)
    

    Then, when you go to upload an item after you changed it, you create a CKRecord by initializing it with your existing cloudKitInfo, then set the fields.

    // cloudKitSystemFields is the Data blob we stored in cloudKitInfo above
    let unarchiver = try NSKeyedUnarchiver(forReadingFrom: cloudKitSystemFields)
    unarchiver.requiresSecureCoding = true
    
    // CKRecord(coder:) is failable, so unwrap appropriately in real code
    let restoredRecord = CKRecord(coder: unarchiver)!
    
    restoredRecord["postTitle"] = myNewPostTitle
    

    This has the nice effect of letting you do basically everything in Swift, and just tacking on the necessary parts of the CKRecord to make the system work properly, without duplicating the entire CKRecord with all of the heavy data fields it may contain.

    Backward/forward compatibility

    One big worry I had was what if in version 1.0 of my app I have a structure like the following:

    struct IceCream {
        let name: String
        let lastEatenOn: Date
    }
    

    And then in version 1.1 of the app I add a new field:

    struct IceCream {
        let name: String
        let lastEatenOn: Date
        let tastiness: Float // New!
    }
    

    If a user has two devices, one that is updated to version 1.1 and another on 1.0, if I save a new IceCream on version 1.1 of the app with both a name of "chocolate" and a tastiness of 0.95, and sync that back to the device on version 1.0, where they eat the ice cream, then sync that back up, crucially that version of the app doesn’t know about the tastiness variable! So it might effectively sync back up IceCream(name: "chocolate", lastEatenOn: .now), and then when version 1.1 gets that, the tastiness is effectively lost data! Noooooo!

    How do we handle this? I dreamt up some complex solutions, but it turns out it’s incredibly easy thanks to the way CKRecord works. CKSyncEngine never documents this anywhere directly, but it obviously uses CloudKit under the hood, and CloudKit has distinct saving policies under CKModifyRecordsOperation.RecordSavePolicy documented here. And no matter what policy you choose (we don’t get a choice with CKSyncEngine) all of them detail the same behavior:

    CloudKit only saves the fields on CKRecords that you explicitly set. In other words, on version 1.0, when we create our CKRecord that represents our local data, it would look something like this:

    let ckRecord = // create CKRecord instance
    ckRecord["name"] = "chocolate"
    ckRecord["lastEatenOn"] = Date.now
    

    Note that we didn’t set tastiness at all, so when it goes up to iCloud, the tastiness field won’t be touched as it’s not present; it will just remain what it was. The only way the tastiness field would get reset is if we explicitly set it to nil.

    So when version 1.1 pulls down this change that version 1.0 made, the CKRecord it pulls down will still have the tastiness field intact. The upshot is that old versions of the app can only touch the fields they know exist, so no harm no foul.

    The only catch is you can’t go in the other direction: don’t delete tastiness in version 1.2 of the app if earlier versions expect it to always exist. Give it some innocent default value.

    Enums are bad

    Enums are a finite set of values, so unless you’re positive that it will never change, don’t use enums in values meant to be cloud-synced.

    Why? Say you have this enum in version 1.0 of your app:

    enum IceCreamFlavor {
        case chocolate
        case strawberry
    }
    

    And in version 1.1 you add a new flavor:

    enum IceCreamFlavor {
        case chocolate
        case strawberry
        case vanilla // New!
    }
    

    What happens when version 1.0 has to decode IceCreamFlavor.vanilla? It will have no idea what that case is and fail to decode, which you could just treat as a nil value. But if you then try to sync that nil value up, you risk overwriting the existing, good value with nil data (unlike the “Backward/forward compatibility” case above where the new value was stored in a different field, this is all operating under the same field/key). Bad.

    Instead, just store it as a string, and you could try to initialize an enum of known values with the string’s raw value if you desire.
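
    A minimal sketch of what I mean, reusing the ice cream example (the rawValue-backed enum and the flavorRawValue property are just illustrative):

    enum IceCreamFlavor: String {
        case chocolate, strawberry
    }
    
    struct IceCream {
        // Source of truth is the raw string straight from CloudKit...
        var flavorRawValue: String
        // ...and the enum is just a derived convenience that may be nil on
        // older app versions that don't know the value yet
        var flavor: IceCreamFlavor? { IceCreamFlavor(rawValue: flavorRawValue) }
    }
    
    // Writing: always store the plain string
    ckRecord["flavor"] = iceCream.flavorRawValue
    
    // Reading: keep whatever string came down, even if this version can't
    // interpret it, so a later save re-uploads it untouched instead of nil
    let downloadedIceCream = IceCream(flavorRawValue: ckRecord["flavor"] as? String ?? "")
    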

    Multiple CKSyncEngine instances

    You have to be really careful with multiple instances of CKSyncEngine.

    At a high level in CloudKit you have CKContainer, which houses three CKDatabase instances: a private one (probably most commonly used), a public one, and a shared one.

    CKSyncEngine only allows one instance to manage an individual database, so that means it’s totally fine to have separate CKSyncEngine instances for a private and shared database. (Not for the public database however, as CKSyncEngine does not support public databases.)

    But you should not have multiple CKSyncEngine instances managing a single private database (I naively tried to do this to have a nice separation of concerns between different types of data in the app). The instances trip over each other very quickly, with it not being clear which instance receives the sync events.

    You can get around this by creating multiple CKContainers, and having a CKSyncEngine per each one, but that feels messy and from what I understand not really how Apple intended containers to be used. Keeping everything under one instance isn’t too bad even with different kinds of data, as you can use different zones or record types to keep things sufficiently separated.
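
    In case it’s useful, here’s roughly what the single-engine setup looks like, with zones doing the separating. The zone names and lastKnownStateSerialization are placeholders; the Configuration initializer and pending database changes are CKSyncEngine API.

    // One engine for the whole private database
    let configuration = CKSyncEngine.Configuration(
        database: CKContainer.default().privateCloudDatabase,
        stateSerialization: lastKnownStateSerialization, // loaded from disk, nil on first launch
        delegate: self
    )
    let syncEngine = CKSyncEngine(configuration)
    
    // Different kinds of data live in different zones rather than different engines
    let iceCreamZone = CKRecordZone(zoneName: "IceCream")
    let toppingsZone = CKRecordZone(zoneName: "Toppings")
    syncEngine.state.add(pendingDatabaseChanges: [.saveZone(iceCreamZone), .saveZone(toppingsZone)])
    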

    Should you not call CKSyncEngine methods if the user isn’t signed into iCloud?

    Apple’s sample project still does! It seems harmless. From my testing, they get enqueued, but are never actioned upon (they never fail unlike normal CKRecordOperations, they just sit waiting forever), and then the queue is wiped when the user signs in.

    What happens if they sign out/sign in while your app is quit?

    No worries, you get the appropriate accountChange event on the next app launch.

    What is the difference between the account change notifications?

    You can either get signedIn, signedOut, or switchAccounts.

    signedIn happens when they had no account and signed into one. signedOut happens when they had an existing account and signed out.

    switchAccounts is a “coalescing” one (you won’t get signedIn/signedOut and switchAccounts), where if your app is running/backgrounded you will get signedOut then signedIn if the user changes accounts, and you won’t get a switchAccounts notification. You only get switchAccounts if your app was quit and you relaunch the app at which point you’ll get the switchAccounts notification (but neither of the other two).

    How does state serialization work?

    Every time anything happens with CKSyncEngine you’re given a stateUpdate event, which you’re expected to persist to disk. This encodes the entirety of your CKSyncEngine’s state into a serialized value, so when the app launches the next time it can start off right where it was.

    It’s essentially a supercharged git commit tag/checkpoint, so iOS knows where your CKSyncEngine exists in time (does it need to pull down any new changes?) and maintains any pending changes/deletions that might not have completed. If your app crashes partway through applying a change, your app simply will not have been issued the new “save checkpoint” notification, so the next time your app relaunches it will simply be restored to the last CKSyncEngine state you saved and retry.

    It also initializes synchronously, so if you had any pending items in your serialized state and you initialize CKSyncEngine, you can view your pending items immediately.

    Also note that if you initialize CKSyncEngine without any state serialization, you always get an “account change: signedIn” notification even if the user didn’t explicitly just sign into their iCloud account.
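
    For what it’s worth, the plumbing here is pretty small. A sketch, assuming a stateFileURL of your choosing (CKSyncEngine.State.Serialization is Codable, which is what makes this easy):

    // In your CKSyncEngineDelegate's handleEvent: persist the latest checkpoint
    if case .stateUpdate(let stateUpdate) = event {
        let data = try? JSONEncoder().encode(stateUpdate.stateSerialization)
        try? data?.write(to: stateFileURL, options: .atomic)
    }
    
    // On launch: hand the last checkpoint back via CKSyncEngine.Configuration's
    // stateSerialization parameter so the engine picks up right where it left off
    let savedState = (try? Data(contentsOf: stateFileURL))
        .flatMap { try? JSONDecoder().decode(CKSyncEngine.State.Serialization.self, from: $0) }
    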

    CKSyncEngine re-initialization

    Per Apple’s sample project, re-initialize your CKSyncEngine (and delete any old state serialization) when the user either signs out or switches accounts, but not when they transition from signed out to signed in, presumably because in the latter case there’s nothing really to invalidate in the CKSyncEngine, whereas there is in the other two cases.
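
    As a sketch, following that sample project guidance (deleteLocalData, deleteSavedStateSerialization, and recreateSyncEngine are hypothetical helpers for your app, and the changeType case names are from memory, so double-check them against CKSyncEngine.Event.AccountChange):

    // Inside handleEvent:
    if case .accountChange(let accountChange) = event {
        switch accountChange.changeType {
        case .signIn:
            break // nothing to invalidate
        case .signOut, .switchAccounts:
            deleteLocalData()
            deleteSavedStateSerialization()
            recreateSyncEngine()
        @unknown default:
            break
        }
    }
    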

    How does error handling work?

    Apple’s sample project indicates that there are a number of transient errors that CKSyncEngine handles automatically for you, like rate limiting issues, no internet connection, iCloud being down, etc. Nice!

    .networkFailure, .networkUnavailable, .zoneBusy, .serviceUnavailable, .notAuthenticated, .operationCancelled, .requestRateLimited
    

    In most of these cases it means the item just gets immediately added back to the pending items queue and CloudKit will pause the queue for a certain amount of time before retrying.

    Other ones you do need to handle yourself, even if they seem like they should be automatic. A good example is quotaExceeded, which you get if the user has run out of iCloud storage and you try to save something.

    In this case Apple pauses the queue until the user frees up space or buys more (or for several minutes, specified by retryAfterSeconds), but does not add your item back, which seems weird to me, so just add it back yourself. Except you can’t simply append it, as that would put it at the end of the queue; you want to insert it back at the beginning of the queue so it’s the next item that will be retried (since it just failed). Only, there’s no API for this, so grab all the items in the queue, empty the queue, then re-add all the items with your failed item at the front.

    These non-transient failures are immediately removed from pending items once they fail, so if you want them to be retried you have to add them back manually.

    (Remember, the pending queue survives app restarts as it’s serialized to disk through state serialization, see above.)
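
    A small sketch of that re-queue-at-the-front dance (the pendingRecordZoneChanges property and the add/remove methods are the real CKSyncEngine.State API):

    func retryAtFront(_ failedChange: CKSyncEngine.PendingRecordZoneChange, syncEngine: CKSyncEngine) {
        let existing = syncEngine.state.pendingRecordZoneChanges
        syncEngine.state.remove(pendingRecordZoneChanges: existing)
        syncEngine.state.add(pendingRecordZoneChanges: [failedChange] + existing)
    }
    
    // e.g. when a save fails with .quotaExceeded:
    // retryAtFront(.saveRecord(failedSave.record.recordID), syncEngine: syncEngine)
    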

    Embedding record types into record IDs

    A small point worth noting is that weirdly CKSyncEngine does not provide the actual recordType (only the string ID) when requesting the fully built CKRecords (which we need in order to tell which SQLite table the ID belongs to), so we can prepend the table name to the beginning of the ID string, for instance IceCream:9arsnt89rna9stda5, and discern it at runtime.
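
    Something like this is all it takes (the separator is arbitrary, just pick one that can’t appear in your IDs):

    func makeRecordID(table: String, id: String, zoneID: CKRecordZone.ID) -> CKRecord.ID {
        CKRecord.ID(recordName: "\(table):\(id)", zoneID: zoneID)
    }
    
    func parseRecordID(_ recordID: CKRecord.ID) -> (table: String, id: String)? {
        let parts = recordID.recordName.split(separator: ":", maxSplits: 1, omittingEmptySubsequences: false)
        guard parts.count == 2 else { return nil }
        return (String(parts[0]), String(parts[1]))
    }
    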

    Let things be automatic

    You can manually pull/push to CKSyncEngine with fetchChanges() and sendChanges() but be careful. You can’t call these inside the CKSyncEngineDelegate methods per CKSyncEngineDelegate documentation:

    CKSyncEngine delivers events serially, which means the delegate doesn’t receive the next event until it finishes handling the current one. To maintain this ordering, don’t call sync engine methods from your delegate that may cause the engine to generate additional events. For example, don’t invoke fetchChanges(_:) or sendChanges(_:) from within handleEvent(_:syncEngine:).

    You can get stuck in weird, infinite loops. In practice I’ve found CKSyncEngine is really great at queuing up changes almost instantly without you having to babysit it and manually pull/fetch; just let it do its own thing and you should get great performance and not run into infinite loop bugs by trying to do things yourself.

    (Also note that the quote is kinda confusing, but it refers to those fetch and send changes methods specifically, adding new items to the queue within the delegate is fine and something Apple does in their sample project.)

    Zone deletion reasons

    When a “zone was deleted” event occurs, ensure you inspect the reason, of which there are 3:

    • deleted means we (the programmer) did it programmatically, commonly done as it’s the easiest/quickest way to delete all the records in a zone
    • purged means the user went through the iOS Settings app and wiped iCloud data for our app, which per Apple’s recommendation means we should delete all local data as well (otherwise it would just sync back up after they explicitly asked for it to be wiped, likely because they were running low on storage), and in the purged case we also delete our local system state serialization change token as it’s no longer valid (this is a full reset).
    • encryptedDataReset means the user had to reset their encrypted data during account recovery and per Apple’s recommendation we treat this as something the user likely did not want to have to do, so reset/delete our system state serialization token and reupload all their data to minimize data loss.
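
    A rough sketch of acting on those inside the fetchedDatabaseChanges event (the wipe/reupload helpers are hypothetical, and I’m writing the deletion’s reason property from memory, so verify the exact spelling against the CKSyncEngine.Event.FetchedDatabaseChanges documentation):

    // Inside handleEvent:
    if case .fetchedDatabaseChanges(let changes) = event {
        for deletion in changes.deletions {
            switch deletion.reason { // may be spelled `purpose` depending on the SDK; check the docs
            case .deleted:
                deleteLocalRecords(inZoneWith: deletion.zoneID)
            case .purged:
                deleteLocalRecords(inZoneWith: deletion.zoneID)
                deleteSavedStateSerialization() // full reset
            case .encryptedDataReset:
                deleteSavedStateSerialization()
                reuploadAllLocalData() // minimize what the user loses
            @unknown default:
                break
            }
        }
    }
    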

    Responding to account status changes

    CloudKit also has a NotificationCenter API for monitoring account changes (Notification.Name.CKAccountChanged) but you don’t really need this at all if you’re using CKSyncEngine, everything comes through the accountChange event that the NotificationCenter API would otherwise provide (just distilled down to signedIn, signedOut, or switchAccounts where the NotificationCenter API is a bit more granular). You can use both, but I haven’t found a need.

    Note that you should react appropriately to the kind of account change that occurred. For instance, following Apple’s sample project recommendation, if you receive a notification that they signedOut, that could mean they signed out of their iCloud account to give their sibling an old iPhone to play around with, and they may have private data they don’t want their sibling to have access to, so we should take this as a cue to delete local data (if they want the data back, when they sign back into iCloud it will be re-downloaded).

    Also note you can get the status of the user’s iCloud account at any point using try await CKContainer.default().accountStatus().

    Batch sizes

    CKRecords can be a max size of 1 MB, but also note that uploaded batches are limited to 1 MB in size, so if you enqueue 10 items to be uploaded, each 1 MB, iCloud will upload them in sequential, 1 MB batches (I sort of expected a single, 10 MB upload that included all the records).

    So that’s uploads, but conversely on the download side, iCloud is happy to download batches much larger than 1 MB in size! I’ve comfortably seen 100 MB+, which can happen when syncing an initial, large library.

    Conclusion

    If I think of any more notes I’ll add them, but hopefully a bunch of these things (that I had to find out through trial and error) save some other folks time when implementing CKSyncEngine!


  • The one software tweak the iPhone Air needs

    September 26, 2025

    One trick doctors hate that will make your iPhone… Sorry.

    I’ve been loving my iPhone Air. A week in I think it’s my favorite iPhone since the iPhone X.

    It has that indescribable feeling that the original MacBook Air had. That, “Wow, computers can be like this?” feeling that’s hard to quantify when you’re just looking at a spec sheet. Picking it up still makes me smile, and I love that the screen is bigger than any iPhone I’ve ever had, while the device overall feels smaller because it’s so thin.

    Even the battery has been surprisingly good: I feel like I have more left at the end of the day than I did with the 15 Pro I’m upgrading from, and Apple’s numbers seem to back this up, showing 23 hours of video playback on the 15 Pro and an increase to 27 on the Air.

    The only area I’ve kinda been disappointed in is the camera situation. No, not the telephoto, I really never used that personally. And not the ultrawide, for me that just felt too wide. But the ultrawide did allow for awesome macro capabilities that this iPhone Air is sorely lacking. At least currently.

    The problem

    Link Amiibo from Ocarina of Time, out of focus

    The iPhone Air’s minimum focus distance just isn’t short enough. Don’t get me wrong, it’s a hair better than my 15 Pro’s main sensor, allowing you to get maybe 15% closer to the subject, but it still does that annoying thing where when you want to take a picture of a small object and have it take up the full field of view, it often goes blurry right when you get it framed up.

    But then I was like, duh, it’s a 48 MP sensor, so I can zoom in to 2x to get twice as close and still get a nice 12 MP photo. So you just pull the phone back a bit, hit 2x, and bam, you have a beautifully framed close shot that’s actually in focus.

    Link Amiibo from Ocarina of Time, in focus

    An “easy” solution

    Look, I won’t claim camera sensor software is in any way easy, but all the other iPhones do an awesome job of detecting when the main sensor has reached its minimum focus distance and then hopping over to the ultrawide to get a nice macro shot that’s still in focus.

    I’d love it if Apple implemented similar software magic on the Air, where instead of having to manually hit that 2x when it gets blurry, Apple detected that you’ve hit the minimum focus distance, instructed you to “back up a bit”, and then automatically made it in focus by cropping in on the main sensor.

    Would it change the world? No, but it’d take out a manual step I’m finding myself doing somewhat frequently.

    Will this level up your macro photography so that you can take pictures of the pollen on the leg of a bee? No, absolutely not. But getting about twice as close to your subject is a massive difference, especially since I find right now the Air’s minimum focus distance is just on the edge of where I want it to be when holding things close.

    Hopefully the brilliant folks at Halide, (Not Boring), or Obscura (listed in alphabetical order so I don’t have to rank my friends) can integrate something like this into their awesome apps if Apple themselves do not.


  • App Clip Local Experiences have consumed my day

    September 8, 2025

    Okay, I have to be doing something astronomically stupid, right? This should be working? I’m playing around with an App Clip and want to just run it on the device as a test, but no matter how I set things up nothing ever works. If you see what I’m doing wrong let me know and I’ll update this, and hopefully we can save someone else in the future a few hours of banging their head!

    Xcode

    App Clips require some setup in App Store Connect, so Apple provides a way to sidestep all that when you’re just testing things: App Clip Local Experiences

    I create a new sample project called IceCreamStore, which has the bundle ID com.christianselig.IceCreamStore. I then go to File > New > Target… > App Clip. I choose the Product Name “IceCreamClip”, and it automatically gets the bundle ID com.christianselig.IceCreamStore.Clip.

    I run both the main target and the app clip target on my iOS 18.6 phone and everything shows up perfectly, so let’s go onto actually configuring the Local Experience.

    Local Experience setup

    I go to Settings.app > Developer > App Clips Testing > Local Experiences > Register Local Experience, and then input the following details:

    Screenshot of iOS Settings app page for App Clip Local Experiences, with the inputted values available in text below
    • URL Prefix: https://boop.com/beep/
    • Bundle ID: com.christianselig.IceCreamStore.Clip (note the Apple guide above says to use the Clip’s bundle ID, but I have tried both)
    • Title: Test1
    • Subtitle: Test2
    • Action: Open

    Upon saving, I then send myself a link to https://boop.com/beep/123 in iMessage, and upon tapping on it… nothing, it just tries to open that URL in Safari rather than in an App Clip (as it presumably should?). Same thing if I paste the URL into Safari’s address bar directly.

    I also tried generating an App Clip Code, but upon scanning it with my device I get “No usable data found”.

    Help

    What’s the deal here, what am I doing wrong? Is my App Store Connect account conspiring against me? I’ve tried on multiple iPhones on both iOS 18 and 26, and the incredible Matt Heaney (wrangler of App Clips) even kindly spent a bunch of time also pulling his hair out over this. We even tried to see if my devices were somehow banned from using App Clips, but nope, production apps using App Clips work fine!

    If you figure this out you would be my favorite person. 😛

    Update: solution. Sorta?

    Okay, seems the solution is two-fold:

    1. Make sure that, in addition to the main app target being installed, you manually switch to the App Clip target and install it directly too
    2. Generate an App Clip Code via the generator CLI (or a nice GUI) and scan that, rather than trying to open from URLs directly

    I will say I do love how Apple stuff 99% of the time does “just work”, but dang, those times when it doesn’t, I really wish they showed some diagnostics I could see as to why.


  • High quality, low filesize GIFs

    August 2, 2025

    A group of small kittens on a carpet

    While the GIF format is a little on the older side, it’s still a really handy format in 2025 for sharing short clips where an actual video file might have some compatibility issues.

    For instance, I find when you just want a short little video on your website, a GIF is still so handy versus a video, where some browsers will refuse to autoplay them, or seem like they’ll autoplay them fine until Low Battery Mode is activated, etc. With GIFs it’s just… easy, and sometimes easy is nice. They’re super handy for showing a screen recording of a cool feature in your app, for instance.

    What’s not nice is the size of GIFs. They have a reputation of being absolutely enormous from a filesize perspective, and they often are, but that doesn’t have to be the case; you can be smart about your GIF and optimize its size substantially. Over the years I’ve tried lots of little apps that promise to help, to no avail, so I’ve developed a little script to make this easier that I thought might be helpful to share.

    Naive approach

    Let’s show where GIFs get that bad reputation so we can have a baseline.

    We’ll use trusty ol’ ffmpeg (in the age of LLMs it is a super handy utility), which, if you don’t have it already, you can install via brew install ffmpeg. It’s a handy (and in my opinion downright essential) tool for doing just about anything with video.

    For a video we’ll use this cute video of some kittens I took at our local animal shelter:

    It’s 4K, 30 FPS, 5 seconds long, and thanks to its H265/HEVC video encoding it’s only 19.5 MB. Not bad!

    Let’s just chuck it into ffmpeg and tell it to output a GIF and see how it does.

    ffmpeg -i kitties.mp4 kitties.gif
    

    Okay, let that run and- oh no.

    A screenshot of macOS Finder showing the GIF at 409.4MB

    For your sake I’m not even going to attach the GIF here in case folks are on mobile data, but the resulting file is 409.4MB. Almost half a gigabyte for a 5 second GIF of kittens. We gotta do better.

    Better

    We can do better.

    Let’s throw a bunch of confusing parameters at ffmpeg (that I’ll break down) to make this a bit more manageable.

    ffmpeg -i kitties.mp4 -filter_complex "fps=24,scale=iw*sar:ih,scale=1000:-1,split[a][b];[a]palettegen[p];[b][p]paletteuse=dither=floyd_steinberg" kitties2.gif
    

    Okay, lot going on here, let’s break it down.

    • fps=24: we’re dropping down to 24 fps from 30 fps; many folks upload full YouTube videos at this framerate, so it’s more than acceptable for a GIF.
    • scale=iw*sar:ih: sometimes video files have weird situations where the aspect ratio of each pixel isn’t square, which GIFs don’t like, so this is just a correction step so that doesn’t potentially trip us up
    • scale=1000:-1: we don’t need our GIF to be 4K, and I’ve found 1,000 pixels across to be a great middle ground for GIFs. The -1 at the end just means scale the height to the appropriate value rather than us having to do the math ourselves.
    • The rest is related to the color palette: we’re telling ffmpeg to scan the entire video to build an appropriate color palette (palettegen), then apply it using Floyd–Steinberg dithering (paletteuse). I find this gives the highest quality output (which is also handy for compressing it more in further steps)

    This gives us a dang good looking GIF that clocks in at about 10% the file size at 45.8MB.

    Link to GIF in lieu of embedding directly

    Nice!

    Even better

    ffmpeg is great, but since it’s geared toward videos it doesn’t do every GIF optimization imaginable. You could stop where we are and be happy, but if you want to shave off a few more megabytes, we can leverage gifsicle, a small command line utility that is built around optimizing GIFs.

    We’ll install gifsicle via brew install gifsicle and throw our GIF into it with the following:

    gifsicle -O3 --lossy=65 --gamma=1.2 kitties2.gif -o kitties3.gif
    

    So what’s going on here?

    • O3 is essentially gifsicle’s most efficient mode, doing fancy things like delta frames so changes between frames are stored rather than each frame separately
    • lossy=65 defines the level of compression; 65 has been a good middle ground for me (200 I believe is the highest compression level)
    • gamma=1.2 is a bit confusing, but essentially the gamma controls how the lossy parameter reacts to (and thus compresses) colors. 1 will allow it to be quite aggressive with colors, while 2.2 (the default) is much less so. Through trial and error I’ve found 1.2 causes nice compression without much of a loss in quality

    The resulting GIF is now 23.8MB, shaving a nice additional 22MB off, so we’re now at a meager 5% of our original filesize.

    Three kittens playing with a pink feather toy on a carpet

    That’s a lot closer to the 4K, 20MB input, so for a GIF I’ll call that a win. And for something like a simpler screen recording it’ll be even smaller!

    Make it easy

    Rather than having to remember that command or come back here and copy paste it all the time, add the following to your ~/.zshrc (or create it if you don’t have one already):

    gifify() {
        # Defaults
        local lossy=65 fps=24 width=1000 gamma=1.2
    
        while [[ $# -gt 0 ]]; do
            case "$1" in
                --lossy) lossy="$2"; shift 2 ;;
                --fps)   fps="$2";   shift 2 ;;
                --width) width="$2"; shift 2 ;;
                --gamma) gamma="$2"; shift 2 ;; 
                --help|-h)
                  echo "Usage: gifify [--lossy N] [--fps N] [--width N] [--gamma VAL] <input video> <output.gif>"
                  echo "Defaults: --lossy 65  --fps 24  --width 1000  --gamma 1.2"
                  return 0
                  ;;
                --) shift; break ;;
                --*) echo "Unknown option: $1" >&2; return 2 ;;
                *)  break ;;
            esac
        done
    
        if (( $# < 2 )); then
            echo "Usage: gifify [--lossy N] [--fps N] [--width N] [--gamma VAL] <input video> <output.gif>" >&2
            return 2
        fi
    
        local in="$1"
        local out="$2"
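        # Encode to a temp GIF first so a failed run doesn't leave a partial output file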
        local tmp="$(mktemp -t gifify.XXXXXX).gif"
        trap 'rm -f "$tmp"' EXIT
    
        echo "[gifify] FFmpeg: starting encode → '$in' → temp GIF (fps=${fps}, width=${width})…"
        if ! ffmpeg -hide_banner -loglevel error -nostats -y -i "$in" \
            -filter_complex "fps=${fps},scale=iw*sar:ih,scale=${width}:-1,split[a][b];[a]palettegen[p];[b][p]paletteuse=dither=floyd_steinberg" \
            "$tmp"
        then
            echo "[gifify] FFmpeg failed." >&2
            return 1
        fi
    
        echo "[gifify] FFmpeg: done. Starting gifsicle (lossy=${lossy}, gamma=${gamma})…"
        if ! gifsicle -O3 --gamma="$gamma" --lossy="$lossy" "$tmp" -o "$out"; then
            echo "[gifify] gifsicle failed." >&2
            return 1
        fi
    
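        # Report the final size (stat -f%z is BSD/macOS, stat -c%s is GNU/Linux)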
        local bytes
        bytes=$(stat -f%z "$out" 2>/dev/null || stat -c%s "$out" 2>/dev/null || echo "")
        if [[ -n "$bytes" ]]; then
            local mb
            mb=$(LC_ALL=C printf "%.2f" $(( bytes / 1000000.0 )))
            echo "[gifify] gifsicle: done. Wrote '$out' (${mb} MB)."
        else
            echo "[gifify] gifsicle: done. Wrote '$out'."
        fi
    }
    

    This will allow you to easily call it as either gifify <input-filename.mp4> <output-gifname.gif> and default to the values above, or if you want to tweak them you can use any optional parameters with gifify --fps 30 --gamma 1.8 --width 600 --lossy 100 <input-filename.mp4> <output-gifname.gif>.

    For instance:

    # Using default values we used above
    gifify cats.mp4 cats.gif
    
    # Changing the lossiness and gamma
    gifify --lossy 30 --gamma 2.2 cats.mp4 cats.gif
    

    Much easier.

    May your GIFs be beautiful and efficient.