Technology starts at the precise point where our bodies and minds end. Its sole purpose is to augment our capabilities beyond what our physical limitations allow.

Imagine one of your ancient ancestors sitting on a rock, looking at a nut that is, as expected, tough to crack. She tries to pry it open with her bare hands, but the hard shell is unyielding. Then a profound moment in history takes place: your ancient ancestor gets up off the rock, smashes the nut against it, and voila! Humankind has just made the leap to using tools, and from there to using technologies.

For thousands of years, this is precisely how technology has come into existence—out of need and unsolved “use cases.” Can’t get that nail into the wall? Enter the hammer! Can’t chop that wood? Here is the ax! Tools have helped humans build the pyramids, plant and cultivate crops, spread the written word, travel to the moon, and so much more. 

Technological tools have been around for so long, and we’ve gotten so used to using them to solve both physical and mental problems, that they have become a natural extension of our bodies. The duality between humans and technology is beautiful; time and time again, the combination of humans and technology has proven that it can help us perform better.

But what happens when new technologies appear?

When a new technology arrives, our natural tendency is to find use cases it is capable of solving. The risk is that we also attach new technologies to use cases they cannot yet deliver on.

The blockchain is an excellent example of a technology that started closer to a proof of concept but was promoted as a mature tool that could solve, well, everything. In reality, the technology, as promising as it may be, is still a far cry from maturity. While some use cases are certainly possible, blockchain technology has yet to become a streamlined extension augmenting our capabilities. The risk with new technologies and techno-optimism arises when we attribute more to a technology than it can actually do.

So, does this risk of overestimating what technology can do mean we should shy away from deploying new technologies? That we should break away from the duality of humans and technology that has served humankind for so long? No, not at all. Instead, we should focus on a “separation of responsibilities.”

Separating the responsibilities between humans and technology

Technology, and in our day and age, digital technology specifically, is excellent at doing some things. Humans, on the other hand, are great at doing other things. The beauty is when the two come together, and technology acts as a natural extension to human capabilities. There is nothing new in this thinking. A hammer is excellent at hammering things; the human fist, not so much. But a hammer can’t strike anything on its own. It needs a human to realize it is required, to provide it with force and momentum, and to guide it to the target.

My familiarity with the layout of a city I have just landed in is limited, but Waze can act as an extension of my knowledge and bridge that gap. I still need to decide where to go and when, and I am still responsible for moving myself around, be it on foot or in a rented car. The technology takes care of getting me there in the least amount of time.

Professor Ken Goldberg, who holds a Chair in Engineering at UC Berkeley, has defined this separation of responsibilities well:

“The things that computers are best at—calculation, precision, and objectivity—are distinct from the qualities that belong primarily to humans—purpose, passion, and understanding.”

Separation of responsibilities means that when we evaluate new technology, design new products, or consider the business impact of new technologies, we should consider what a human should do and what a machine should do. Striking the right balance prevents overextending or undershooting the capabilities of human and machine collaboration:


When evaluating new technology and its potential impact, assess how the “separation of responsibilities” will change over time. Initially, humans will still need to do the bulk of the work, be it physical or mental. As the technology evolves, more responsibilities will shift toward the machine: intelligence and automation will improve, and the value of the combined human and machine will increase.

When designing new products enabled by new technologies, use the “separation of responsibilities” to decide which features will be handled by the human user and which by the machine. If you get this separation wrong, you will run into one of two problems. The first is feasibility: if we push the tech envelope too far, it will not perform. On the flip side, if you leave too much with humans that could have been performed by a machine, you are underutilizing and under-optimizing the combined performance. This tool can help convert solutions that rely on future technologies into solutions that are possible with today’s technologies.

When considering the business impact of utilizing new technology, consider the costs associated with separating the responsibilities between human and machine. Leaving the human with too much to do has a cost. But developing the technology that can take that responsibility away from the human does not come cheap either.


On all three accounts, if we separate the responsibilities between human and technology correctly, we will get the most out of what a human can do and what a machine can do. We will have a product that leverages the capabilities of technology in the right way, is feasible, and is viable from a business point of view.

More from Professor Ken Goldberg:

An Antidote to Automation Anxiety: Intelligence Amplification
