Written by Mark Tappan
on August 30, 2018

Just listening to or reading about artificial intelligence (AI), you would think the end of humankind is nigh. It is interesting how often the discussion starts with, or makes tongue-in-cheek references to, metallic overlords brimming with artificial intelligence who eventually enslave humankind or drive it to extinction – and it’s all our own fault. Well, maybe it will come to pass, but in the meantime, let’s stop worrying about it and make friends with artificial intelligence. No doubt change, including the adoption of new concepts, is hard, and as we get older, change gets harder. One of the hardest parts of change is not knowing what life will be like on the other side.

This is not about the Sixth Extinction – the AI-led, inevitable extinction of humankind. It is about learning how to cohabit with a useful and enabling technology. AI is not new, though it is really just starting to impact society. My first encounter with AI was learning about a rules-based AI system called MYCIN, developed at Stanford in the 1970s. The premise behind AI is to make human existence better: to automate and free us from the drudgery and expand our cognitive horizons.

Building Trust in AI

How can we trust that life will be better, that we will live an evolved life, and that we’ll achieve things beyond science-fiction dreams? Interestingly, I think snakes, spiders, and other unfamiliar creatures suffer from the same prejudices and trust issues. I want to start with the first step: learning to understand artificial intelligence from a societal perspective. The fascinating technical perspectives of computational neuroscience, convolutional neural networks, and other approaches will wait for a future blog.

Understanding Artificial Intelligence

Artificial intelligence is a term heard in many walks of life, yet there is no agreed-upon, succinct definition. Most definitions relate AI to computer systems that mimic some form of human cognition or “intelligence.” I like the definition of artificial intelligence offered in the Gartner IT Glossary:

“Technology that appears to emulate human performance typically by learning, coming to its own conclusions, appearing to understand complex content, engaging in natural dialogs with people, enhancing human cognitive performance or replacing people on execution of non-routine tasks."

This perspective echoes the viewpoints of others such as Andrew Ng and emphasizes several key points:

  • “Emulate human performance” – This implies that AI can be applied to tasks that humans can already do and can measure. It places AI solidly in a support role, freeing humans from tasks they currently perform so they can learn and take on new ones.
  • “Enhancing human cognitive performance or replacing people” – Many customers describe how more time is spent ingesting and moving data between tools than processing data into actionable information. We are wasting valuable human cognition on these and other mundane tasks. If these tasks could be decomposed into sequences of small efforts, however, we could employ AI techniques to free humans to focus on greater cognitive tasks.
  • “Typically by learning, coming to its own conclusions” – The class of AI systems known as rule-based systems are limited by the complexity of the rules that can be defined and codified by humans. Rule-based systems are good examples of deterministic, codified human intelligence. They lack the autonomy we expect as part of “intelligence.” Learning systems, on the other hand, exhibit more of the autonomy we expect in AI and can independently adapt their rules. Through machine learning approaches, AI systems are more adaptive and reactive to their data environment and can draw conclusions without relying on pre-ordained human rules.

This independent adaptation, however, is part of the distrust humans have of AI systems (or other humans) that produce results without justification or explanation. One way to address the trust and transparency challenge is a hybrid approach, in which the learning system can ask for human assistance and can respond to human requests for explanations of its actions (or decisions).

  • “Execution of non-routine tasks” – “Non-routine” can refer to tasks that require some degree of adaptation of tools and services to complete the objective. I believe adaptation of tools in response to variations in tasks is a core characteristic of “intelligence.” So, just like the need to break operational activities into bite-sized mundane tasks, the implementation of AI systems must identify and characterize non-routine tasks in operational activities.
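The contrast drawn above between codified rules and learning systems can be sketched in a few lines of code. This is a hypothetical toy example (the function names, vocabulary, and scoring scheme are mine, not drawn from MYCIN or any system mentioned here): a rule-based flagger whose logic is fixed by humans, next to a learner that adapts its own threshold in response to human feedback.

```python
def rule_based_flag(message: str) -> bool:
    """Deterministic, human-codified rule: flag any message containing 'urgent'."""
    return "urgent" in message.lower()


class LearnedFlagger:
    """Flags messages by a score threshold that it adapts from human feedback."""

    SUSPICIOUS = {"urgent", "winner", "free", "act", "now"}  # toy vocabulary

    def __init__(self, threshold: float = 0.5, step: float = 0.3):
        self.threshold = threshold
        self.step = step

    def score(self, message: str) -> float:
        # Toy scoring: fraction of words drawn from the suspicious vocabulary.
        words = message.lower().split()
        return sum(w in self.SUSPICIOUS for w in words) / max(len(words), 1)

    def flag(self, message: str) -> bool:
        return self.score(message) >= self.threshold

    def learn(self, message: str, was_spam: bool) -> None:
        # Adapt the threshold whenever current behavior disagrees with human
        # feedback; the fixed rule above has no equivalent mechanism.
        if self.flag(message) != was_spam:
            self.threshold += -self.step if was_spam else self.step


flagger = LearnedFlagger()
flagger.learn("please review budget now", was_spam=True)  # feedback lowers the threshold
```

The rule-based function will behave identically forever; the learner, given the hybrid-style human feedback described above, revises its own decision boundary.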

Old School Thinking to the Rescue

Employing and deploying systems with artificial intelligence in an enterprise or business requires more thought and planning than simply buying and deploying tools. A prevailing sense of distrust and endangerment among workers and organizations hinders adoption and acceptance of AI technologies. Startup companies and entrepreneurs face these same dilemmas in their quest to bring new products and services to market. In fact, scientists have struggled with the adoption of new and sometimes radical ideas for centuries.

Approaches such as the Scientific Method and its entrepreneurial cousin, the Lean Startup methodology, are ideal for helping organizations explore, evaluate, educate on, and adopt artificial intelligence systems. Both approaches are experiment-based, with heavy reliance on measuring results, adjusting principles and goals, and then experimenting further. Entrepreneurial approaches add a heavy focus on customer development throughout the process – a critical element in the integration of AI into the enterprise. Customer development is about building products (or, I contend, AI systems) that your customers (employees) want and will use by talking with them, engaging them in the process, and listening and adapting to their input. It’s an ideal approach to building trust while building the right solution, modernizing tradecraft, and training.

Strategy & Vision. The development, coordination, and iteration of a strategic vision for an AI project is the critical communication of the organization’s goals and associated rationale. As part of customer development – which is better labeled stakeholder development for AI projects – it educates and informs all stakeholders – leaders, workers, partners, and customers – on the how and why. The vision may require several iterations through the hypothesis-experiment-assessment cycle. Stakeholders react better and provide positive support when they are included in the plan(s) and when they see themselves in the plan(s).

Find a Target Activity. There are likely to be multiple “good” candidates in the organization where artificial intelligence systems can help achieve the vision. Organizations can structure experiment cycles that employ methods such as Human-Centered Design to better assess candidate operational activities. The goal is to find those mundane, non-routine types of tasks. Stay away from tasks that are hard for humans to perform or hard to articulate.

Conduct a series of hypothesis-driven experiments (e.g., prototypes) and collect measured feedback from potential customers. It will likely take many iterations to find good candidate activities. Some activities may be tasks or processes currently performed in the organization; others may be new capabilities that AI can enable.
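The hypothesis-experiment-feedback loop described above can be sketched as a minimal harness. This is a hypothetical illustration (the candidate name, scoring callable, and thresholds are mine): iterate on one candidate activity until measured feedback clears an adoption threshold, or give up and pivot to another candidate.

```python
def run_cycles(candidate, experiment, threshold=0.7, max_iterations=5):
    """Iterate hypothesis -> experiment -> measured feedback on one candidate.

    `experiment` is a callable returning a feedback score in [0, 1] from
    potential customers; real projects would gather far richer metrics.
    """
    history = []
    for iteration in range(max_iterations):
        score = experiment(candidate, iteration)
        history.append(score)
        if score >= threshold:   # validated: pursue this activity
            return True, history
    return False, history        # not validated: pivot to another candidate


# Toy experiment in which feedback improves as the prototype is refined
# each cycle, so the candidate is validated after a few iterations.
adopted, scores = run_cycles("triage incoming reports",
                             lambda activity, i: 0.4 + 0.1 * i)
```

The point is not the arithmetic but the shape: each cycle records a measurement, and the decision to adopt or pivot is made against evidence rather than enthusiasm.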

Adopt AI. There are many facets to constructing, integrating, and adopting AI systems; the figure below illustrates a few of them. The new AI system will change how your organization functions – after all, that’s the point. How will it change or impact other operations? It will require new tradecraft, new skills, and so on. While you are experimenting and constructing the technology side, you must work on the operational, business, and human aspects as well. What better way than by using coordinated experiment cycles to inform your AI team on different aspects of the project? How many great technologies have been deployed to staff untrained in their use, or unfamiliar with their purpose? Stakeholder development is critical in every phase, but possibly most critical in this one. The Adopt AI phase is when stakeholders get to kick the tires, understand the real potential of the system, and begin to recognize and support its value.

[Figure: Adopting AI]

Getting Started.

The initial phases of adopting artificial intelligence, like the initial phases of adopting any new technology, focus primarily on social and business objectives. The interesting and fun technical objectives come later.

Stakeholder education and communication on what AI means to the organization are critical throughout the process. Focus on business value and on unlocking the potential of your employees. Most importantly, communicate that AI systems are not the Hollywood-style science fiction AI. It is not about replacing your staff but about finding new roles for them. AI is just another tool, and the adoption of AI is just like the adoption of other tools.

Key takeaways:

  • Identify and describe how people perform applicable tasks. If you can’t describe how people do it, it will be incredibly hard to get AI to do it.
  • Break operational activities into small, non-routine, achievable tasks, and then build AI systems that learn and adapt by interacting with employees. Human engagement and validation of AI performance increase transparency and trust.
  • Modernize human tradecraft to incorporate human-machine collaboration and train impacted employees on new skills that leverage AI systems.
