The Humane Ai Pin has been announced: a phone alternative trying its best not to be a phone in any way. With Humane famously spearheaded by ex-Apple luminaries Imran Chaudhri (with large amounts of the iPhone and multi-touch user experience to his name) and Bethany Bongiorno (a Director of Software Engineering from the launch of the original iPad), and counting among its ranks Ken Kocienda (part of the initial Safari/WebKit team and designer of the first software keyboard and typing autocorrect), I find myself wondering what I'm missing.
There's the ambition, the philosophical thrust behind the product itself: people bewitched by apps, addicted to constant impulses, be they doom-scrolling or drip-fed entertainment. The desire to break free of the neck-craning prison of the pocket rectangle is understandable. (It has also served as a siren song for both Windows Phone and Apple Watch before.)
There's the technological moment in time. AI personal assistants have been available for years, and recent breakthroughs mean this is probably the first time this type of device could do what it promises with convincing accuracy (remembering things, relating them to the current location, time, and context) well enough to be essentially the only interface, the only contact surface. Not a parlor trick you can ignore if you want: a small projected readout in your palm aside, talking to it and having it understand what you mean and do it is all there is.
The Humane Ai Pin didn't happen by chance and was not lazily extracted from between the couch cushions. A lot of talented people spent a lot of time on it, clearly chasing a deep vision.
So why does it seem so terribly, undeniably off?
There is a precipitous cliff for anything beyond "talking to the magical AI", where viewing photos and videos and managing settings all happen by going to a ".Center" site in a browser. The product site features food delivery and messaging between friends, two things that are well handled by apps today and that look dreadful to handle via voice entry or the projected palm interface, more fit for haikus than menus. But the "cosmos" operating system is leaning into this, supposed to be free of all types of apps. So much for growing pains.
I am not the first to react strongly to this, but I am probably uncommon in my intense dislike of personal assistant AIs, a dislike that obviously flares to new heights in a product so heavily focused on them. The Humane site harps on privacy and trust, but what is private about being forced to live your life out loud, about not being able to jot a thought down silently? Were these things even discussed on a fundamental level during the considerable ideation, or was anyone who raised them just seen as a bearer of the bad old culture, steeped in the musky scent of old magics?
A little experience can be a dangerous thing. Having gone through a world-changing evolution of how most people interact with personal technology, I understand if people think "in the beginning, they will laugh at you and say that the keyboard will not work until the screen can deliver convincing tactile feedback echoing physical buttons, but look at what happened; the world adapted and we won". I understand if some of the people involved feel a strange mix of regret over what this new technology, and everything that happened in its wake, has wrought, and the professional and curious imperative to do it again by taking the next leap, to unwind the next impediment to the machine just knowing what you mean to do when you interact with it.
If walking around in the world while looking at a screen because you're reading something counts as being absorbed by something else and not being present, then tapping a pocket square and talking to a virtual assistant about the same thing you would otherwise accomplish with a screen is also not being present.
In the scale of things, what exists is a technological achievement (and taking the recent progress and extrapolating five more years, might be even more so). There's just no compelling reason for anyone to throw the things that already exist overboard to use it and only it. (Maybe if you believe so strictly in the mission of Humane that aligning with it overrides every practical concern and dresses the contortions up in adherence to a more enlightened existence.)
This doesn't sound like an insurmountable issue to me: let it be a purposeful, focused device, used in addition to other things and free from the burden, whether catered to or not, of having to be all the other things. But I'm not sure the people who would found Humane would want to go down that road.