A Product Designer's Analysis of Humane's AI Pin (Part 1)
AI wearables are here, and Humane has built a one-of-a-kind product that challenges the status quo of personal devices and sets an example for a new kind of ambient AI assistant in our lives. Humane's AI Pin is powered by ChatGPT and costs upwards of $700, plus a $24 monthly subscription. I'm not interested in the pricing debacle; what I am interested in is how the device opens up avenues of interaction and experience that go beyond the screen-and-stylus setups we already own. I will critique its UX choices and product-market fit, and how designers like you and me can and should prepare for a future where designs and interactions like this could become the new norm.
Prompt: A striking comic-style illustration of a man walking down the bustling streets of New York, wearing a small, black square AI pin on his chest. The pin projects a holographic face, illuminated by high-tech blue lasers, of another person who is interacting with the wearer. The man is trying to communicate with the projected face, but his mute feature prevents him from doing so. Passersby gawk at the spectacle, displaying a mix of awe and confusion. The background reveals the iconic New York City skyline, with towering buildings and yellow taxis zooming by.
Virtue Signaling
Every smartphone facilitates experiences in three categories:
entertainment
communication
tools
Humane wants us to embrace a hypothetical life without the screen and without social media frying our dopamine receptors, which is a kind of virtue signaling: you get to be the person who intentionally disconnects from the reel world and appreciates the real one. But that is not how the world's infrastructure is set up. Businesses want your attention, other people want your attention, hell, even you want other people's attention. In a world that not only runs but thrives on attention, removing that channel from a device meant to compete with smartphones, which already do everything this pin does and more, looks like an uphill endeavor.
Enter VUIs
Voice user interfaces have been around for a while, and each generation has had its share of problems and improvements. What's interesting about Humane's AI Pin is that its entire VUI can seemingly be projected onto the palm of your hand instantaneously, creating a very sci-fi, high-tech experience. I do believe this medium of interaction can change the way we communicate with tech, but is the experience there yet? I don't think so. Will the improvements follow through quickly? I hope so, for Humane's sake.
As a product designer, a good and foundational place to start is building the skill of crafting seamless conversational experiences. Conversational design can be extremely tricky, since subjective layers are added with every phrase and command, so understanding the nuance of "localizing" the conversation would help the experience feel more Humane. Another issue with AI-powered VUIs is latency. My research tells me that natural language processing at this scale, layered with compute and API calls to generate an output, is going to be slow, slow in comparison to what we're used to with smartphones and the other tech in our lives. That will improve with time, but in the meantime a good VUI experience can mask the wait by triggering voice-based micro-interactions, as sketched below.
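To make that concrete, here is a minimal sketch of the pattern in Python. Every name in it is hypothetical: Humane has not published an SDK, so call_llm, play_earcon, and speak are stand-ins for whatever audio and model layers the device actually exposes. The point is the structure: start the slow model call immediately, then fill the silence with quick acknowledgments so the user knows they were heard.

```python
import asyncio

# All names here are hypothetical stand-ins; Humane has not published
# an SDK, so treat this as a sketch of the pattern, not a real API.

async def call_llm(utterance: str) -> str:
    """Stand-in for the slow round-trip: NLP + compute + API call."""
    await asyncio.sleep(2.5)  # simulated model/network latency
    return f"Here's what I found about '{utterance}'."

async def play_earcon() -> None:
    """Stand-in for a short acknowledgment chime played on capture."""
    print("*chime*")

async def speak(text: str) -> None:
    """Stand-in for the device's text-to-speech output."""
    print(f"assistant: {text}")

async def handle_utterance(utterance: str) -> None:
    # Start the slow model call immediately, without waiting on it...
    answer = asyncio.create_task(call_llm(utterance))
    # ...then fill the silence with micro-interactions so the user
    # knows the device heard them and is working on a response.
    await play_earcon()
    await speak("On it...")
    # Finally, deliver the real answer once it arrives.
    await speak(await answer)

asyncio.run(handle_utterance("best coffee near me"))
```

Run it and the chime and "On it..." land instantly while the simulated 2.5-second latency resolves in the background: the actual wait doesn't shrink, but the perceived one does, which is the whole trick.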
Interaction Heuristics
Despite the Pin being a one-of-a-kind physical design, there is no excuse for not leveraging existing heuristics to keep the learning curve shallow. After watching a few walkthrough videos by Sam Sheffer, I am convinced that despite good status visibility, the various triggers for capturing photos and videos will isolate and confuse users, simply because there is no immediate preview. The video said a preview is available on Humane's .center web portal, so I am assuming you would still need your phone or another device to review your captures. I understand this is probably not a design flaw but a constraint of the tech.
In part 2 of this article, I will dig deeper into decoding the interactions and visual design of the product's interface, to build better foresight into the type of design skills a product designer might need in the coming years.