We're coming up on the end of the busiest season in the tech industry. Facebook held its F8 conference in April, Microsoft held its Build conference in early May, and Google just wrapped up their Google I/O conference. At the events, the companies—each guided by the grand vision of their CEO—laid out their strategies for the upcoming year.
Upon close investigation, any observer will notice that artificial intelligence is becoming increasingly important when it comes to product strategy. Speakers from each of the three companies talked at length about AI at their respective keynotes; Apple followed suit at WWDC a few weeks later.
Google spent 35% of this year’s keynote talking about AI-powered initiatives and products such as Google Assistant, Google Photos, and YouTube, compared to only 18% last year.
An increase like this should get any manager thinking. Those who have already created an AI strategy should stop to reassess whether these recent developments render it more or less relevant. More importantly, however, those who don’t have an AI strategy ready should start working on one immediately. Unfortunately, most executives and product managers have very little understanding of how to "apply AI" today. It’s a new and difficult paradigm. There’s a lot of low-level tech and math involved.
If you don’t want to be left behind, this primer will help you cover much of the groundwork and craft your own AI strategy.
What’s in it for you?
If you’re new to the topic, this primer will give you a good overview of the basic elements of an AI-driven product strategy, as well as the prerequisites necessary for creating one. You’ll also learn to identify tasks and areas which can be automated using machine learning, and I hope to encourage you to think about the data your company collects in a new, AI-oriented way. Last but not least, we’ll discuss the technical expertise and talent needed to execute the strategy—I don’t expect you to have excellent command of the technological aspect of the subject, but I won’t stop to explain basic terms like machine learning or training data.
This is a primer on AI strategy, not AI itself. If you’re looking for the latter, InfoQ published a good minibook on it. And if you have already put some thought into using AI to secure a competitive advantage, you can always use this primer either as a validation measure or, perhaps, a checklist to make sure you haven’t forgotten anything.
Why do I think I’m the right person to help you?
Well, I may have faced similar problems not that long ago. Even though I’m a proficient programmer, I didn’t major in Computer Science and haven’t attended any formal artificial intelligence courses—even if I had, chances are I wouldn’t have fared better than many software engineers who find themselves lost in this new AI-oriented environment despite their academic background.
Two years ago, I set out to change that. I read voraciously, talked to experts, and eventually built my own AI tech-driven startup.
Why should you trust me when it comes to products and strategy?
My background and experience in the subject are fairly extensive—as a product manager, I’ve built software products for startups and enterprises from the United States, Australia, the United Kingdom, Canada, Germany, and Kenya. I also wrote a book about product management.
Without further ado, let’s dive into the details of building an AI strategy. We shall begin by briefly discussing the elements making up the micro level of such a strategy.
👩‍💻 Working with the AI strategy checklist
There are four things a company must do in order to create an AI strategy:
- identify tasks for AI to automate;
- collect and refine data sets for AI to learn from;
- decide on a third-party strategy;
- hire data scientists and AI-oriented software engineers.
Identifying AI-friendly tasks should be the cornerstone of your strategy. A task—also called a skill in Amazon’s ecosystem, or action in Google’s—is a job which can be automated using machine learning. Everything else should logically follow the tasks you identify.
As soon as you choose your tasks, you will know what kind of data you’ll have to collect or obtain in order to train your algorithms. Tasks will also help you decide your third-party strategy—by which I mean your ability and willingness to use third-party AI services built by other companies, both big and small. Your third-party strategy will, in turn, define your hiring strategy for the months to come. In short, the more you want to do in-house, the more experts you have to hire.
Now that we know what the general process looks like, we’re going to discuss each element of our checklist in detail—starting with tasks.
✅ Identifying AI-friendly tasks using the PAC grid
Every month, we’re hearing news of new advances in the AI space. Each such breakthrough means that neural networks have learned to handle new tasks. Two decades ago, computers famously learned to play chess; now they can also play Go. AI helps run Google’s server rooms, diagnose cancer, optimize savings accounts, translate text and images, identify spam, fight online harassment, and so on.
Does this mean everything can now be a task for AI to handle?
This is way too optimistic, even given the latest research. How, then, can non-technical executives identify areas where AI can be helpful?
The quick answer is—they should make a PAC grid.
I first learned of the PAC grid from Rob May’s blog, where he frequently writes about the new AI paradigm. “PAC stands for Predict, Automate, and Classify,” he says. “These are three things that current A.I. technologies can do really well.” We’ll talk about Predict, Automate, and Classify tasks at length in a moment. “To make your first grid,” continues May, “make three columns, one for Predict, one for Automate, and one for Classify. Then on the rows, list key areas of your business.” Once filled in, the matrix provides a good framework for brainstorming AI jobs based on tasks we already know other companies have deployed in production with great success.
Figure 1 An empty PAC grid with three areas—Product, Operations, and Pricing—ready for some AI brainstorming.
The important part is that AI tasks are almost always derived from previously manual tasks. In this sense, AI improves the execution of jobs you are already doing right now—it just makes the experience smoother or cheaper. Don’t use the PAC grid to look for revolutionary new applications; in my experience, this doesn’t work out very well. Your existing product or service is the most obvious area where you can try it out, but others will fit well, too. A minimal sketch of such a grid is shown below. In the next section, we’ll see how Uber applies the prediction task to their business model and how you too can fill the Predict column with great ideas.
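To make the brainstorming concrete, here’s a minimal sketch of how you might keep a PAC grid as a simple data structure; the business areas and example ideas are hypothetical placeholders, not recommendations.

```python
# A minimal sketch of a PAC grid as a plain dictionary: rows are business
# areas, columns are the three task types. The areas and ideas below are
# hypothetical placeholders -- replace them with your own during brainstorming.
pac_grid = {
    "Product":    {"Predict": [], "Automate": [], "Classify": []},
    "Operations": {"Predict": [], "Automate": [], "Classify": []},
    "Pricing":    {"Predict": [], "Automate": [], "Classify": []},
}

# Fill in a couple of example ideas.
pac_grid["Pricing"]["Predict"].append("Forecast demand to adjust prices on the fly")
pac_grid["Operations"]["Automate"].append("Answer recurring support questions")
pac_grid["Product"]["Classify"].append("Tag user-uploaded photos automatically")

# Print the grid as a simple table for review.
for area, tasks in pac_grid.items():
    for task_type, ideas in tasks.items():
        for idea in ideas:
            print(f"{area:<12} {task_type:<10} {idea}")
```

A spreadsheet works just as well; the point is simply to force yourself to consider every area and task combination.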
📈 Identifying prediction tasks
A prediction task is a machine learning task of making future predictions based on past data. The Predict job is one of the most established ones in the AI world, as it usually deals with statistical data, scalar values, and linear regression, something most financial institutions have been dealing with for a long, long time.
In fact, prediction tasks work so well in that industry that many firms who used to employ dozens of traders along with a handful of software engineers now do the opposite and hire dozens of data scientists who work with just a few traders. Numerous trading firms have even started applying machine learning techniques to live data feeds to trade automatically. Many trades are already initiated by machines on one end and accepted by different machines on the other.
Real estate is another example. If you look at Coursera’s “Machine Learning Foundations” course, you’ll notice that one lesson discusses learning a simple regression model to predict house prices from house size. The model is based on The Boston Housing Dataset, which contains information collected by the U.S. Census Service on housing in the area of Boston, Massachusetts, and is often used by beginners as a learning resource. A real estate company attempting to gain a competitive edge should really look into hiring a team of software engineers—if it hasn’t already.
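If you’d like to see what a Predict task looks like in practice, here is a minimal sketch of a house-price regression with scikit-learn. The data points are invented for illustration; in a real project you would train on an actual data set such as the Boston housing data mentioned above.

```python
# A minimal prediction-task sketch: learn house price from house size with a
# simple linear regression. The data points below are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Training data: house size in square feet -> sale price in dollars.
sizes = np.array([[850], [1200], [1500], [1800], [2400], [3000]])
prices = np.array([180_000, 240_000, 290_000, 340_000, 430_000, 520_000])

model = LinearRegression()
model.fit(sizes, prices)

# Predict the price of an unseen 2,000-square-foot house.
predicted = model.predict(np.array([[2000]]))
print(f"Predicted price: ${predicted[0]:,.0f}")
```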
I’ve already mentioned Uber—the company applies AI as a core part of their business model. Uber uses the prediction task to change their pricing on the fly.
Surge pricing is an obvious example of Uber trying to predict supply and demand in order to activate their drivers. While surge pricing also deals with one-time events, such as concerts or festivals, demand goes up according to predictable trends as well—every Friday evening, for example.
Uber also calculates its riders’ propensity for paying a higher price for a particular route. If you travelled from one wealthy neighborhood to another, you could be charged more. Such price discrimination tends to be highly efficient because, in an ideal world, you’d want to charge each user up to what they are willing to pay. And a good prediction task can automatically figure out the sweet spot between a price that’s too low and a price that’s too high.
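Uber’s real models are obviously far more sophisticated, but the underlying idea can be sketched in a few lines: predict demand from time-based features, then turn the ratio of predicted demand to available supply into a price multiplier. All the numbers, features, and thresholds below are invented for illustration.

```python
# Toy sketch of demand-based surge pricing: predict ride requests from
# time-of-week features, then derive a price multiplier. All numbers are
# invented; a real system would use far richer features and models.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Features: [day_of_week (0=Mon), hour_of_day]; target: ride requests observed.
history_X = np.array([[4, 18], [4, 22], [5, 23], [1, 10], [2, 14], [6, 2]])
history_y = np.array([420, 610, 680, 90, 120, 300])

model = GradientBoostingRegressor().fit(history_X, history_y)

def surge_multiplier(day_of_week, hour, available_drivers):
    """Scale prices up when predicted demand outstrips driver supply."""
    predicted_demand = model.predict([[day_of_week, hour]])[0]
    ratio = predicted_demand / max(available_drivers, 1)
    # Cap the multiplier between 1.0x and 2.0x in this toy example.
    return round(min(max(ratio, 1.0), 2.0), 2)

print(surge_multiplier(day_of_week=4, hour=22, available_drivers=350))
```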
One can conclude that the prediction task deals best with forecasting prices, but that’s true only to some extent. Many companies have been successfully using the Predict job in other fields, too.
Amazon is a great example. Each time you buy anything at the everything store, Amazon’s algorithms try to predict what you’re going to buy next and deliver friendly recommendations based on your previous purchases. That particular application might sound boring, because we’ve had recommendation systems for years now, but the difference here is that machine learning models can learn, change, and respond on their own, while the previous models relied on pure statistics and were unable to adjust to changing results. There are multiple areas where these new models can now be deployed, and I am certain that you too will be able to identify a couple while analyzing your own business with the grid framework.
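Amazon’s production recommender is much more involved, but the core “customers who bought X also bought Y” idea can be sketched with simple item-to-item similarity over a purchase matrix; the matrix below is invented.

```python
# Toy item-to-item recommendation sketch: recommend products whose purchase
# patterns are most similar to something the user already bought.
# The purchase matrix is invented (rows = users, columns = items).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

items = ["book", "e-reader", "lamp", "desk", "chair"]
purchases = np.array([
    [1, 1, 0, 0, 0],   # user 1 bought a book and an e-reader
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
])

# Similarity between item columns, based on which users bought them together.
similarity = cosine_similarity(purchases.T)

def recommend(bought_item, top_n=2):
    idx = items.index(bought_item)
    ranked = np.argsort(similarity[idx])[::-1]          # most similar first
    return [items[i] for i in ranked if i != idx][:top_n]

print(recommend("book"))   # e.g. ['e-reader', 'lamp']
```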
🤖 Identifying automation tasks
Now that we know what the “P” in “PAC” stands for, we can move on with our AI strategy to the “A”—the automation task, a machine learning task entailing the automation of a previously manual task, such as customer support, text summarization, or writing press releases. We can soon expect AI to get even better in generating, modifying, and refining simple texts, images, or sounds with little human input.
A lot of knowledge jobs can already be automated if they fit the following rule:
Whenever a job done by a human can be broken up into discrete activities which fit the rest of the PAC grid, AI can automate it with a level of success that varies from job to job.
Let’s apply this rule of thumb to a customer support-oriented example. Most support cases can be broken up into two smaller tasks: classifying incoming questions based on a predefined set of previous questions and predicting the expected answer. As both classification and prediction are in the PAC grid, automating customer support in narrow domains is technically possible—as evidenced by the recent explosion of chatbots.
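As a minimal sketch of the first sub-task, classifying an incoming question against a set of known question types, you could start with TF-IDF features and logistic regression; the sample questions, intents, and canned answers below are invented.

```python
# Minimal sketch of the "classify the incoming question" half of support
# automation: TF-IDF features + logistic regression. Training examples and
# intents below are invented; a real bot would need far more labeled data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "Can I get a refund for my order?",
    "I want my money back",
    "How do I reset my password?",
    "I forgot my password",
    "Where is my package?",
    "My delivery hasn't arrived yet",
]
intents = ["refund", "refund", "password", "password", "shipping", "shipping"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(questions, intents)

canned_answers = {
    "refund": "You can request a refund from the Orders page within 30 days.",
    "password": "Use the 'Forgot password' link on the sign-in page.",
    "shipping": "Tracking details are available under Orders > Track shipment.",
}

intent = classifier.predict(["can i have a refund please"])[0]
print(canned_answers[intent])
```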
Similarly, medical diagnosis can often be automated, too, because it’s possible to break the high-level task down into a number of smaller ones, including symptom classification and prediction of possible treatments.
In both cases, AI helps us reduce the cost of a job—expressed either in less money spent on maintaining call centers or less time spent on initial diagnoses.
However, there’s a huge caveat—the AI will always be only as smart as the list of examples it can draw upon. Because we’re not yet anywhere close to spontaneous or meaningful conversations, you should think more of simple support queries like “Can I get a refund?” or, in the case of an HR automation bot like Talla, for example, “What’s your vacation policy?” Managing expectations will prove key in this instance. But then again, a lot of support queries are fairly simple, they often recur, and answers don’t change much as time passes—that’s why call centers can write scripts and not worry about high employee turnover. If you’re able to identify similar information flows in your company and break them down into smaller, easy-to-handle tasks, you too will be able to unleash the power of automation.
🌇 Identifying classification tasks
Last but not least, classification, which we’ve already discussed briefly in the previous section. A classification task is a machine learning task that automatically assigns similar items to a common group. Traditionally, classification has been used to perform jobs such as recognizing and preventing spam or quarantining sensitive content; more recently, however, the number of potential applications of this particular task has broadened significantly.
I’ve already mentioned two examples before: classifying support queries and medical symptoms. The other obvious example is classifying images—a task with consistently high levels of accuracy. And I don’t mean static images only; with methods like deep learning, even live camera data can yield accurate results, classifying objects in real time. In such cases, AI helps you improve your value proposition by letting you create a smarter email client like Gmail or a smarter photo sharing app like Google Photos, which knows how to effortlessly recognize people and places in the photos you take in order to create smart albums.
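To give you a feel for how accessible image classification has become, here is a minimal sketch using a network pretrained on ImageNet via Keras; any comparable library would do, and the file path is a placeholder.

```python
# Minimal image-classification sketch using a network pretrained on ImageNet.
# Requires TensorFlow/Keras; "photo.jpg" is a placeholder path.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")        # downloads pretrained weights

img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

predictions = model.predict(x)
# Print the three most likely labels with their confidence scores.
for _, label, score in decode_predictions(predictions, top=3)[0]:
    print(f"{label}: {score:.2%}")
```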
Other applications can be ground-breaking, too. For example, thanks to HBO and their hit TV show Silicon Valley, there's finally an app that will tell you whether the object you’re pointing at with your phone's camera is a hot dog or not. It is very good.
Jokes aside, the classification task can help you review any kind of data that you can label, even if there are multiple attributes to take care of. Law is a good example. Soon, AI will help lawyers and judges classify cases and accurately predict whether or not any statutes or regulations were broken. After all, common law is based on precedent, and a precedent-based system does classify prior verdicts in order to predict newer decisions based on the older ones. Similarly, civil law often assumes that law is a closed system made up of a finite set of discrete rules. Given that the rules are kept up to date and each legal case is properly labeled and classified, it should be possible to accurately predict new rulings with AI.
In fact, in 2016, a machine learning system was able to look at legal evidence and choose how a case should be decided with 79% accuracy. Lead researcher Dr. Nikolaos Aletras of University College London said: “We don't see AI replacing judges or lawyers, but we think they'd find it useful for rapidly identifying patterns in cases that lead to certain outcomes.”
“Rapid identification of patterns” is, in other words, classification.
So if you look at your company closely and are able to spot areas where the Classify task could apply, you should consider using AI as an augmentation tool capable of greatly reducing time and cost investments. This method is particularly powerful when combined with techniques like value stream mapping, because value stream mapping requires us to split any given process into discrete chunks and observe the flow of business value in the stream—from development to production and right down to the end user. When the flow is not optimal, the process of delivering value gets blocked. For example, many courts are so overwhelmed with their caseload that getting from trial to sentencing can take years. The Bar points out that the process is so protracted because each case must be initially reviewed by a judge who then has to personally decide whether a case should proceed or be dismissed. If classification and prioritization of cases could be handled by AI, the review process would speed up, causing the value delivery flow to improve dramatically.
✅ Collecting and refining data sets
We saw how we can use the PAC grid to identify tasks and jobs to brainstorm AI-powered applications—we’ll now move on to the next item on our checklist: collecting and refining data sets.
An engineer can come up with the greatest algorithm in the world, but it won’t do any good on its own—to function, AIs require data to learn from, and the more examples they see, the better the results they produce. Remember, however, that the size of the data sets you’re going to need will vary from task to task. Acquiring data sets comprising thousands or, preferably, millions of data points is no easy feat and often requires highly coordinated efforts—and as such, it should be every manager’s area of interest.
In this section, we’ll discuss the different types of data your company can use, as well as the methods of acquiring and refining them so that machines can learn from them.
🎓 Differentiating between data sets
There are five basic kinds of data sets:
- numerical data—any kind of plottable numbers such as The Boston Housing Dataset;
- behavioral data—discrete behaviors of your users which we can infer patterns from, like Uber does with supply and demand;
- text data—chats, transcripts, articles, tweets, or scraped websites;
- image data—pictures of items you want to classify;
- sound data—voices, conversations, commands, dialogues, podcasts, recordings, and so on.
Most companies show a heavy bias towards numerical, behavioral, and textual data because such data sets are easy to collect and the firms usually already have some. Images and sounds, on the other hand, require considerable storage and processing capabilities. Preferably, data should be collected based on the tasks you want to perform, not the other way around. Limiting yourself to tasks that you already have data for can only be a harmful half measure—if, for example, a competitor embraces the new AI paradigm by starting to collect more diverse data, your transition will only be more painful.
Another issue to consider is that there has been a lot of development recently in terms of image and voice recognition with software advancing to human-like levels of accuracy. And with the advent of tools like Amazon Echo, Google Home, or Siri, voice is becoming the most popular and the most underrated AI technology worldwide. Having invested heavily in their voice-based products, giants such as Google, Amazon, Apple, and Microsoft will only drive the research in this field forwards, making it cheaper and more accessible in the long run. I’m not suggesting you base your AI strategy on voice regardless of the costs, but dismissing non-obvious possibilities out of hand might be premature.
💵 Acquiring data
Knowing what kind of data you’re going to need is one thing; acquiring it is a completely different matter. If you already have a lot of data, you’re home. If you don’t, you’re… well… a long way from home. This is a case most startups will find themselves facing. Unlike the mobile revolution, the AI revolution will heavily favor incumbents who most likely already have numerous data sets at their disposal. If you’re a founder, however, don’t despair prematurely. Jeffrey Eisenberg reports that “79 percent of businesses obsessively capture Internet traffic data, yet only 30 percent of them changed their sites as a result of analysis.” If the same is true for data other than traffic, startups will still have a lot of leverage to use to gain an edge over their competitors.
When missing a necessary data set, you may want to acquire the data from an external source or to collect it on your own. If it’s the former, there are data marketplaces to buy or download data from—Trimble Data Marketplace, for example, where you can acquire a lot of mapping data. Google has released several of their data sets to the public, including Google Trends Datastore, Google Books Ngrams, YouTube-8M, and the Open Images Dataset; Amazon has a large repository of public data sets as well. You can also try to secure a license from other companies who have the information you need.
If you want to pursue the latter option, you’ll have to make sure you’ve got enough resources to collect the data on your own within a reasonable timeframe. If the data you need can easily be scraped from the Web, a couple of software engineers should usually be enough. Sometimes, though, historical data is difficult or impossible to obtain and has to be collected in real time. In this case, you can temporarily put a trained operator behind the wheel—a technique known as a Wizard of Oz scenario. In a typical Wizard of Oz application, an unaware user believes that a fully functional AI-powered system exists while in fact humans do all the work behind the scenes. As long as the user still gets a meaningful result from the fake prototype, there’s no harm done—and we can begin collecting the data we need.
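A Wizard of Oz setup doesn’t require much machinery: route the user’s request to a hidden human operator and log every question-and-answer pair as future training data. Here is a minimal sketch; all function names and file paths are hypothetical.

```python
# Minimal Wizard of Oz sketch: the user thinks they're talking to a bot, but a
# human operator answers, and every exchange is logged as future training data.
# All names and file paths here are hypothetical placeholders.
import csv
from datetime import datetime, timezone

TRAINING_LOG = "wizard_of_oz_training_data.csv"

def ask_human_operator(question: str) -> str:
    """Stand-in for the hidden operator console; here we just use the terminal."""
    return input(f"[operator] User asked: {question!r} -- your reply: ")

def handle_user_message(question: str) -> str:
    answer = ask_human_operator(question)
    # Append the labeled example so a future model can learn from it.
    with open(TRAINING_LOG, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), question, answer])
    return answer

print(handle_user_message("Do you ship to Canada?"))
```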
✍️ Labeling data
As soon as you start collecting any data, you’ll have to refine it so that it can be processed by machines. And when it comes to getting data sets ready for machine learning, there are two important terms you need to be familiar with:
- Supervised learning—a task of inferring a function from labeled training data;
- Unsupervised learning—a task of inferring a function that describes hidden structure in unlabeled data (see the sketch below).
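Here is a minimal sketch contrasting the two with scikit-learn: a supervised classifier trained on labeled points versus an unsupervised clustering of the same points without labels. The data is invented.

```python
# Minimal contrast between supervised and unsupervised learning on invented
# 2-D data points.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

points = np.array([[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]])

# Supervised: we provide labels ("small" vs. "large") and learn to predict them.
labels = ["small", "small", "small", "large", "large", "large"]
classifier = KNeighborsClassifier(n_neighbors=3).fit(points, labels)
print(classifier.predict([[2, 2], [9, 9]]))      # -> ['small' 'large']

# Unsupervised: no labels at all; the algorithm discovers the two groups itself.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(points)
print(clusters)                                   # e.g. [0 0 0 1 1 1]
```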
Most popular machine learning applications deployed at scale expect labeled data sets for supervised learning. While there’s state-of-the-art research on learning from unlabeled data, it’s still too early for companies without dedicated teams of scientists to use such methods reliably in production. Thus, we’re stuck with supervised learning for at least a generation of AI-powered software—which means that you’ll need humans to label the data, either in-house or by outsourcing.
Figure 2 Labeling personal data in text samples.
HIT marketplaces (HIT stands for Human Intelligence Task) like Amazon Mechanical Turk or Scale can help you with the latter. Such marketplaces let you outsource human microtasks such as image annotation, transcription, categorization, comparison, or data collection. Paid workers will complete each task for you in a matter of minutes. As your database of examples grows, the initial inefficiency will slowly be eliminated as your algorithms use the data labeled by humans to learn and improve over time, gradually replacing the fake Wizard of Oz application you started with.
✅ Deciding on a third-party strategy
We’ve identified the tasks; we’ve got the data—are we finally ready to begin working on our first AI-powered product? Well… not yet. Before we get to that, we need to develop our third-party strategy—namely, we must understand how much of the required AI technology we must build on our own and how much we can get from elsewhere.
Crunchbase, a popular business information platform, lists more than 1,200 startups dealing with artificial intelligence, each tackling a different aspect of the AI landscape. Some train neural networks to process text; others deal with images, sounds, or numbers. Some build low-level tech or even specialized hardware; others maintain high-level platforms for crunching knowledge. One of these startups may already be doing something you can use today, instead of you spending months building your own version in-house. You should watch what market leaders are doing, too. In 2016 alone, Google, IBM, Yahoo, Intel, Apple, and Salesforce acquired more than 40 AI startups—that makes about 140 acquisitions in the AI space since 2011—and used these new resources to build and grow their own platforms, such as Google’s API.AI or Facebook’s Wit for text, Microsoft’s Bot Framework and Cognitive Services for text and pictures, or Amazon’s Polly for text-to-speech.
If you need another example, Vize is a custom image recognition API which can learn whatever you need to recognize, whether it’s product images, webcam photos, or histological sections. If you wanted it to, Vize could handle the image classification task for you.
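I don’t want to reproduce any particular vendor’s interface here, but consuming such a service usually boils down to a single HTTP call. The endpoint, field names, and response shape below are hypothetical placeholders, not Vize’s actual API; check your vendor’s documentation for the real interface.

```python
# Hypothetical sketch of calling a third-party image classification API.
# The endpoint, field names, and API key below are placeholders -- consult your
# vendor's documentation for the real interface.
import requests

API_URL = "https://api.example-vision-service.com/v1/classify"   # placeholder
API_KEY = "your-api-key-here"                                     # placeholder

with open("product_photo.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
    )
response.raise_for_status()
print(response.json())   # e.g. {"label": "running shoe", "confidence": 0.93}
```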
The upside is that you don’t have to build the tech on your own. The downside is the other side of the same coin—since you didn’t build the tech, you don’t control it. Lack of control won’t hurt most popular applications, but it can yield diminishing returns with more specialized use cases—a problem you should take into account.
In general, I think you ought to keep your core competences close to your chest, controlling both the user experience and technology behind it, but try to cut corners by being smart and resourceful in less relevant areas. Thankfully, it’s a problem many executives have already tackled when dealing with the transition to cloud and SaaS services from self-hosted solutions.
✅ Hiring the talent required to deploy an AI strategy
Choosing a third-party strategy is something a product manager should never do alone—especially when they lack the required technical expertise and experience. That’s why we should finally talk about human resources; namely, hiring the talent required to deploy an AI strategy. An enormously difficult and expensive job in general, hiring can become even more taxing when it comes to attracting data scientists, researchers, and software engineers with experience in building AI-enabled software. The supply is limited, while the demand is growing. Don’t believe me? Listen to Google’s Jeff Dean:
A leader in the firm’s ML effort, Jeff Dean, who is to software at Google as Tom Brady is to quarterbacking in the NFL, […] estimates that of Google’s 25,000 engineers, only a “few thousand” are proficient in machine learning. Maybe ten percent. He’d like that to be closer to a hundred percent. “It would be great to have every engineer have at least some amount of knowledge of machine learning,” he says.
Thus, you should expect to pay extra for an engineer who knows how to do machine learning—which is also an important factor to consider when choosing your third-party strategy.
For companies operating on a shoestring budget, the demand for software engineers who know their way around artificial intelligence will determine a lot of other factors in their AI strategy. Aggressive poaching is something you should defend against, too—in private conversations, I’ve heard about entire teams of researchers being bought out from universities by big companies.
An alternative approach would be to take the long-term path and promote bottom-up development by having the engineers you already have on your staff learn the new paradigm—in some cases, this approach may even be faster than absorbing outside expertise as new hires often don’t pan out.
Smaller companies don’t need to do their own research; they can ride on the backs of others. Granted—we’re still waiting for the Ruby on Rails of the AI world, but we’re getting closer and closer as more companies release their own tools under open source licenses for the benefit of the entire industry.
For example, here’s an open source driving agent created by comma.ai. Yes, you heard me right—there’s a library on GitHub for everyone to see that deals with self-driving cars. Manning Publications—my publisher—has already released more than a few books about machine learning for popular use, including “Real-World Machine Learning,” “Machine Learning in Action,” “Machine Learning with TensorFlow,” “Reactive Machine Learning Systems,” “Grokking Deep Learning,” and “Deep Learning with Python.”
While it may seem that we’re already in the middle of an AI boom, the truth is that we still have a long way to go. Even Google’s progress, while technically impressive, is still considered by some analysts as having “precious little practical application.” Taking a long-term perspective on human resources might be more sensible than many impatient managers currently think.
🎬 Summary
That’s it: we’ve reviewed the entire AI strategy checklist. Hopefully, I helped you—at least to some extent—with each of the four points we discussed: tasks, data, third-party platforms, and hiring. If you have any questions, please let me know—I’ll answer in the comments.