Age of Artificial Intelligence

It’s hard to know what to think about Artificial Intelligence these days. Almost everyone in the industry believes it to be a revolution as deep and as fundamental as the industrial revolution or the birth of the computer. Yet the field seems divided over when we will get to general intelligence and what that will ultimately mean for us. Some believe it will be our greatest invention yet and others that it may lead to our doom.

AI is hot, I mean really hot. VCs love it, pouring in over $1.5B in just the first half of this year. Consumer companies like Google and Facebook also love AI, with notable apps like News Feed, Messenger, Google Photos, Gmail and Search leveraging machine learning to improve their relevance. And it’s now spreading into the enterprise, with moves like Salesforce unveiling Einstein, Microsoft’s Cortana / Azure ML, Oracle with Intelligent App Cloud, SAP’s Application Intelligence, and Google with TensorFlow (and their TPUs).

Artificial Intelligence (AI) is rapidly becoming the biggest issue on the agenda for many businesses. The current speed of development, the sheer range of possible applications, and the potential impact of AI suggest that it's time for CEOs to pay attention.

What is Artificial Intelligence?

AI will change the philosophy, practice and management of business. It is beginning to transform businesses and replace even senior management and leadership roles. CEOs must invest the necessary time and attention to understand what AI is and where the opportunities are.

Distilling a generally accepted definition of what qualifies as artificial intelligence (AI) has become a revived topic of debate in recent times. Some have rebranded AI as “cognitive computing” or “machine intelligence”, while others incorrectly use AI interchangeably with “machine learning”. This is partly because AI is not one technology: it is a broad field made up of many disciplines, ranging from robotics to machine learning. The ultimate goal of AI, most practitioners agree, is to build machines capable of performing tasks and cognitive functions that are otherwise only within the scope of human intelligence. To get there, machines must be able to learn these capabilities automatically rather than having each of them explicitly programmed end-to-end.

The field of AI has made remarkable progress over the last 10 years, from self-driving cars to speech recognition and synthesis. Against this backdrop, AI has become a topic of conversation in more and more companies and households, which have come to see AI not as a technology that is another 20 years away, but as something that is impacting their lives today. Indeed, the popular press reports on AI almost every day, and technology giants, one by one, articulate their significant long-term AI strategies.

Because AI will impact the entire economy, the actors in these conversations represent the entire distribution of intents, levels of understanding, and degrees of experience with building or using AI systems. The pace of AI development has caught most observers unaware. Large-scale investment is being made by companies like Google, IBM, Microsoft, Uber and Baidu; in some cases, firms are effectively "betting the ranch" on AI. We might soon see anything from robot lawyers to ultra-intelligent mobile personal assistants. Many firms are looking at relatively narrow deployments that automate rule-based decision making and predict future demand and customer behavior from accumulated data. Others are looking at much broader uses such as intelligent HR, finance and legal advisors and real-time analytics of live transactions.

Perhaps some of the most extreme applications include the creation of "human free" automated businesses where everything from strategy to operational processes is embedded "in the system". These so-called "distributed autonomous organizations" could become increasingly common. They are already being used to manage millions of transactions - e.g. the automated dispute resolution systems in online auction platforms. AI is already being deployed to automate clerical, manual and semi-skilled labour, and is now supporting and replacing professionals in domains such as engineering, medicine, law, and accountancy.

If you are building an AI-First application, you need to follow the data — and you need a lot of data — so you will likely gravitate toward integrating with big platforms (as in big companies with many customers) that have APIs to pull data from. There’s so much valuable data in a CRM system, but five years ago, pretty much no one was applying machine learning to this data to improve sales. The data was, and for many companies still is, untapped. There’s got to be more to CRM than basic data entry and reporting, right? If we could apply machine learning, and if it worked, it could drive more revenue for companies. So naturally, we (Infer) went after CRM (Salesforce, Dynamics, SAP C4C), along with the marketing automation platforms (Marketo, Eloqua, Pardot, HubSpot) and even custom sales and marketing databases (via REST APIs). We helped usher in a new category around Predictive Sales and Marketing.

Now, with machine learning infrastructure in the open — with freely flowing documentation, how-to guides, online courses, open source libraries, cloud services, etc. — machine learning is being democratized. Anyone can model data. Some do it better than others, especially those with more infrastructure (for deep learning and huge data sets) and a better understanding of the algorithms and the underlying data.

You may occasionally get pretty close with off-the-shelf approaches, but it’s almost always better to optimize for a particular problem. By doing so, you’ll not only squeeze out better performance, but the understanding you gain from going deep will help you generalize and handle new data inputs better — which is key for knowing how to explain, fix, tweak and train the model over time to maintain or improve performance. It’s very important to understand where users need their predictions — and this may not be in just one system, but many. We had to provide open APIs and build direct integrations for Marketo, Eloqua, Salesforce, Microsoft Dynamics, HubSpot, Pardot, Google Analytics and Microsoft Power BI. Integrating into these systems is not fun. Each one has its own challenges: how to push predictions into records without locking out users who are editing at the same time; how to pull all the behavioral activity data out to determine when a prospect will be ready to buy (without exceeding the API limits); how to populate predictions across millions of records in minutes, not hours; and so on.
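The batching and throttling challenges above can be sketched in a few lines. The function below is a hypothetical illustration, not Infer’s actual integration code: it pushes prediction scores through a caller-supplied bulk-update call in batches, pacing requests to stay under an assumed per-second API limit.

```python
import time

def push_predictions(records, bulk_update, batch_size=200, max_calls_per_sec=5):
    """Push prediction records in batches, throttled to respect a
    (hypothetical) vendor rate limit of max_calls_per_sec requests."""
    min_interval = 1.0 / max_calls_per_sec
    last_call = 0.0
    pushed = 0
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        # Sleep just long enough to keep under the rate limit.
        wait = min_interval - (time.monotonic() - last_call)
        if wait > 0:
            time.sleep(wait)
        bulk_update(batch)  # e.g. a bulk-update REST endpoint
        last_call = time.monotonic()
        pushed += len(batch)
    return pushed
```

In practice each platform adds its own wrinkles (optimistic locking, pagination, per-day quotas), but batching plus client-side throttling is the common core of staying inside API limits while updating millions of records.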

These are hard software and systems problems (99% perspiration). In fact, the integration work likely consumed more time than our modeling work. This is what it means to be truly “predictive everywhere.” Some companies like Salesforce are touting this idea, but it’s confined to their own stack. For specific solutions like predictive lead scoring, that approach falls apart quickly, because most mid-market and enterprise companies run lead scoring in marketing automation systems like Marketo, Eloqua and HubSpot. Let’s think about use cases and about making predictive disappear into the product. This is a crucial dimension and a clear sign of a mature AI-First company. There are a lot of early startups selling AI as their product to business users. However, most business users don’t want AI, and shouldn’t — they want a solution to a problem.

AI is not a solution in itself, but an optimization technique. At Infer, we support three primary applications (or use cases) to help sales and marketing teams: Qualification, Nurturing and Net New. We provide workflows that you can install in your automation systems to leverage our predictive tech and make each of these use cases more intelligent. However, we could position and sell these apps without even mentioning the word predictive, because it’s all about the business value. Predictive is core to the value, but not what we lead with. Where we are different is in the lengths we go to: guiding our customers with real-world playbooks, formulating and vetting models that best serve their individual use cases, and helping them establish sticky workflows that drive consistent success. We’ll initially sell customers one application, and hopefully, over time, the depth of our use cases will impress them so much that we’ll cross-sell them into all three apps. This approach has been huge for us. It’s also been a major differentiator — we achieved our best-ever competitive win rate this year (despite 2016 being our most competitive year yet) by talking less about predictive. Vendors that overdo the predictive and AI talk are missing the point: data science is a behind-the-scenes optimization. It’s a fun category to be in (it certainly helps with engineering recruiting) and it makes for great marketing buzz, but that positioning is not terribly helpful in the later stages of a deal or for driving customer success.

Machine Learning

The most important subset of AI is machine learning, a field which made big breakthroughs in the latter half of the 20th century but then lay dormant, waiting for the processing power of computers to catch up with the demands that machine learning algorithms placed on them. A key driver behind machine learning is the rise of big data. In 1992 we collectively produced 100 GB of data per day; by 2018 we will be producing 50,000 GB per second. Data is permeating every aspect of life, and there is too much of it for any person, or any group of people, to parse. Machine learning is helping us organize and make sense of these mountains of data. TensorFlow, a popular library for machine learning, embraces the innovation and community engagement of open source while having the support, guidance, and stability of a large corporation. Because of this combination of strengths, TensorFlow is appropriate for individuals and businesses ranging from startups to companies as large as, well, Google.
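To make “learning from data” concrete, here is a minimal, self-contained sketch (plain Python, not TensorFlow code) of the basic idea underlying much of machine learning: fit model parameters by gradient descent so that predictions match the data.

```python
def fit_line(xs, ys, lr=0.01, steps=2000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # generated by y = 2x + 1
w, b = fit_line(xs, ys)  # converges to roughly w = 2, b = 1
```

Libraries like TensorFlow automate exactly this loop (computing gradients and updating parameters) for models with millions of parameters instead of two, and run it efficiently on GPUs and TPUs.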

TensorFlow is currently being used for natural language processing, artificial intelligence, computer vision, and predictive analytics. TensorFlow, open-sourced to the public by Google in November 2015, was made to be flexible, efficient, extensible, and portable. Computers of any shape and size can run it, from smartphones all the way up to huge computing clusters. Automated machine learning (AutoML) has become a topic of considerable interest over the past year. AutoML is not automated data science. While there is undoubtedly overlap, machine learning is but one of many tools in the data science toolkit, and it does not factor into all data science tasks. For example, if prediction is part of a given data science task, machine learning will be a useful component; however, machine learning may not play into a descriptive analytics task at all. Even for predictive tasks, data science encompasses much more than the actual predictive modeling.

Randy Olson (Data Scientist) states that effective machine learning design requires us to:

1. Always tune the hyperparameters for our models

2. Always try out many different models

3. Always explore numerous feature representations for our data

Taking all of the above into account, if we consider AutoML to cover algorithm selection, hyperparameter tuning, iterative modeling, and model assessment, we can start to define what AutoML actually is. Automated machine learning will quietly become an important development in its own right. Like deep neural networks, automated machine learning will begin to have far-reaching consequences in ML, AI, and data science, and 2017 will likely be the year this becomes apparent. In the near future, AutoML will take over much of the machine learning model-building process: once a data set is in a (relatively) clean format, the AutoML system will be able to design and optimize a machine learning pipeline faster than 99% of the humans out there.
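A toy version of that AutoML loop (algorithm selection plus hyperparameter tuning plus model assessment) fits in a few dozen lines. Everything below is illustrative, not any real AutoML system: two simple model families (a mean predictor and a k-nearest-neighbor regressor) and a search that scores each candidate on a held-out split and keeps the best.

```python
def train_mean(train, **_):
    """Baseline model: always predict the training mean."""
    mean = sum(y for _, y in train) / len(train)
    return lambda x: mean

def train_knn(train, k=3):
    """k-nearest-neighbor regressor over 1-D inputs."""
    def predict(x):
        nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
        return sum(y for _, y in nearest) / k
    return predict

def auto_select(train, valid, search_space):
    """Try every (model family, hyperparameter) combination and
    return (validation MSE, model name, params) of the best one."""
    def mse(model):
        return sum((model(x) - y) ** 2 for x, y in valid) / len(valid)
    best = None
    for trainer, grid in search_space:
        for params in grid:
            score = mse(trainer(train, **params))
            if best is None or score < best[0]:
                best = (score, trainer.__name__, params)
    return best

data = [(x, 2 * x) for x in range(20)]
train, valid = data[::2], data[1::2]
search_space = [
    (train_mean, [{}]),
    (train_knn, [{"k": 1}, {"k": 3}, {"k": 5}]),
]
best = auto_select(train, valid, search_space)
```

Real AutoML systems search far richer spaces (feature preprocessing, ensembles, learning rates) with smarter strategies than exhaustive grid search, such as Bayesian optimization, but the select-tune-assess loop is the same.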

One long-term trend in AutoML is that these systems will become mainstream in the machine learning world. The methods of automated machine learning are developed to support data scientists, not to replace them. Such methods can free the data scientist from tedious, complicated tasks (like hyperparameter optimization) that can be solved better by machines. But analyzing results and drawing conclusions still has to be done by human experts -- and in particular, data scientists who know the application domain will remain extremely important.

The author, Rajesh Angadi, holds a Bachelor of Engineering and a Master of Business Administration, is a certified PMP, and is Hadoop certified. He has more than two decades of experience in Information Technology and has worked with industry majors such as Unisys, Intel, Satyam, Microsoft, Ford, Hartford, Compaq, and Princeton.

