Even as technology has advanced at an ever-increasing pace over the years and opened up countless ways to improve our world, the plain fact is that for most of that time the interactions between users and tools have been primitive and one-way.
Keyboards and screens have served adequately for generations, enabling enormous gains in communication and productivity, but the software behind them has mostly remained static once development is complete and the final product ships. Users could react and adapt to what was in front of them, but digital products had no way to gauge what was happening with the user and behave differently to become more inviting.
The push to change this one-way interaction comes through humanizing technology with artificial intelligence that can recognize and take on human qualities. The prospect is becoming more realistic by the day, with researchers in Bangalore recently touting the creation of AI technology that can mimic the human brain’s processes.
In the process, humanized technologies become more pleasant and easier to use, and unseen friction points that survived design and launch can be identified and minimized so the product operates as comfortably as possible for each individual user.
Making technology “joyful to use,” trustworthy, humanlike, and capable of the heavy lifting and drudge work that people dislike are just some of the boxes that must be checked, according to the thinking from a Google gathering in Berlin that dug into the details of humanizing artificial intelligence.
Andrew Tull sat down with Harsha Bellur, EVP and CIO at James Avery Artisan Jewelry, to explore this topic further: “We have to understand how people have evolved and where they are coming from. The new generation and the old. For us, we have generational customers… As much as we want to think about exciting technology like biometric payments, Apple Pay, etc., we have to first understand we have a core set of customers who are most comfortable writing a check. And so as an organization, and as a tech professional, I have to cater to all of our customers. Is it challenging? Is it great if I just have everyone do it one way? Yes! But then it’s organization-centric tech, and not People Driven Tech.”
Tune in to Episode 1 of Humanizing Software, and listen in every Tuesday at 11 AM CT here.
This goal – taking automated one-way interfaces and making them personable and human-like – can be realized only through a research-intensive process that involves hundreds of user interface and usability experiments conducted with diverse groups of potential users.
This is how we learn about the emotional journey users take while they are using technology tools to solve a problem.
It is also how unseen pain points become apparent, how reactions to changes can be studied empirically, and how human-like intelligence can begin to take shape and usefully take cues from users about their exact needs.
On that frontier, researchers at Columbia University have developed a technology that learns to predict human actions by watching videos of sporting events, sitcoms and other content, suggesting that software companies could soon build capabilities to interpret, learn from and anticipate user behaviors.
Finding Balance
It’s important to keep in mind that there are practical limits to how human a technology can become before users grow uneasy and are turned off by the interactions.
Imagine if an Alexa device went from being an easy source of information and content to crossing the line into holding conversations and taking on humanlike personality traits. That could create confusing interactions for children, stir suspicion among family members about what Alexa knows and shares about them, and leave a general feeling of dissonance about what constitutes an authentic human interaction.
If Alexa, Siri, chatbots and other technologies stay on the right side of the human/machine divide, we can gradually arrive at tools and devices that don’t just operate intuitively but feel like extensions of ourselves.
Finding that balance takes patience and a general agreement among technologists of all kinds to weigh the big question of “should we?” alongside “can we?” when reaching for new levels of humanization. It’s difficult to establish wide-reaching rules and checklists that companies should test their innovations against, but building in steps to pause and objectively examine the possible effects on users of each humanization advancement seems like a minimum requirement.
Creators who strike the right balance and build responsible humanization into their offerings are likely to experience something akin to an evolutionary step in their business. A clear divide will emerge between humanizing-focused companies and those that may have impressive technical resources in talent, hardware and design but don’t point them toward bringing empathy, personality and creativity to their products.
Software and devices that can build familiarity with users – akin to a relationship, but with the appropriate boundaries – will quickly be seen as the preferred format for all digital products. The “one-way” static format that has little if any deep choice architecture or ability to create a whole new reality will become a relic, like DOS in the age of Windows.
The advantages of proper humanization come not just from customer preferences that will lead to more use time and purchases. With technology that is elegantly designed to achieve the highest possible level of interaction with a user, more data about preferences and habits will be available than ever before.
During a 2019 talk at The World Innovation Network, Jesus Mantas, IBM’s senior managing partner for strategy and innovation, said unlocking the human side of technology is perhaps the most noble and needed engineering problem to be solved.
“Humanizing technology, and creating systems where technology and people work better together is not only morally right or something we should do… it is just a better way to solve an engineering problem and get things done,” he said. “The people that get very good at engineering for empathy are actually going to solve problems faster, better and are going to thrive while others continue to struggle.”
That data can be parsed at the individual level and in aggregate to learn more than ever about how products can best serve customers, and which new innovations are most likely to be well received and commercially successful.
There’s a double-edged sword dynamic to that kind of data access, however, with users’ privacy and identity concerns being absolutely paramount. Much like the “should we?” concerns from earlier, security and privacy safeguards as well as ethical considerations need to become top-level components of these technologies.
Much like extensive testing in the earliest stages is necessary to learn as much as possible about how users respond to software and other technology, the same attention and rigor must be applied to learning how those tools could be hacked or weaponized against the user. Neglecting the security and privacy side of humanization would invite bad actors to erase the trust and optimism inherent in these advancements.