2019 Innovation Issue: Artificial intelligence is no longer science fiction

The rapid rise of artificial intelligence in recent years has been simultaneously stunning, promising—and a bit scary.

AI has spurred visions of robot armies that, depending on whom you ask, could make your life much easier, take your job or even rule the world. Or some combination of those scenarios.

One reason for the wide disparity in predictions is that, while the term artificial intelligence seems to be everywhere these days, a shared understanding of what AI is and what it does definitely is not.

At its roots, AI exists when software mimics cognitive functions—such as learning and problem solving—that have traditionally been the purview of the human mind. It’s sometimes referred to as machine learning, artificial neural networks or deep learning—although some experts make distinctions among those terms.

Whatever you call it, there’s no denying that AI’s impact is already considerable and its potential is awe-inspiring.

According to a recent PricewaterhouseCoopers report, artificial intelligence will add $15.7 trillion to the global economy by 2030, boosting worldwide GDP by as much as 14%. China and North America will see nearly 70% of that gain, the report concludes.

Improvements in AI are the driving force behind the surging autonomous car industry. Financial services firms can benefit from AI-based process automation and fraud detection. Logistics companies can use AI for better inventory and delivery management. Retailers can track and accurately predict consumer behavior. Utilities can use smart meters and smart grids to lower power consumption.

Marketing and marketing tech firms like DemandJump, Emarsys and Genesys—which all have local presences—are already employing AI technology in a big way.

Amazon’s Alexa, Google’s Home, Apple’s Siri and Microsoft’s Cortana all use AI-based algorithms not only to give people information, but also to study their purchasing habits and lifestyles.

Many transactions are already completed with the help of an AI-based chatbot or virtual assistant, and that share is expected to keep growing. Most consumers never even notice; increasingly, they can’t tell the difference between interacting with a bot and interacting with another person.

A variety of organizations are using artificial intelligence to make major business decisions. Coca-Cola, for instance, released Cherry Sprite after AI analysis of data collected from its self-serve fountain machines. And Coke officials said virtual assistants incorporated into its vending machines are right around the corner.

Despite the enormous early impacts, the technology is clearly still in its infancy. According to Gartner’s 2018 CIO survey, only 4% of companies have invested in and deployed an AI-based solution. But that number is growing rapidly, tech experts told IBJ.

Years in the making

While the reality of artificial intelligence is new to most people, the concept has been around for decades—and not just in science fiction thrillers like “Terminator,” “Star Wars” and, of course, the 2001 movie “A.I.” starring Haley Joel Osment.

The idea of inanimate objects coming to life as intelligent beings has been around for thousands of years. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons before the time of Christ.

But the field wasn’t formally founded until a 1956 conference at Dartmouth College in New Hampshire, where the term “artificial intelligence” was coined by computer scientist and professor John McCarthy. Various strands of research under the heading of “thinking machines” were explored at the conference.

The U.S. military began investing heavily in AI, but with little real progress. Federal authorities started pulling research funding in the late 1960s, triggering the first of what computer scientists call “AI winters.” The field has seen at least three such cycles of boom and bust, industry experts said.

“Despite appearances, the evolution of AI has been a gradual one,” said David Crandall, an associate professor in Indiana University’s School of Informatics, Computing and Engineering. “It’s really come in fits and starts. In the last four or five years, though, it really seems to be taking off. And even more so within the last year.”

The most recent—and arguably most significant—AI boom has been fueled by some key advances.

“What’s making AI possible at the level we’re at now is access to large-scale data and access to greatly increased and improved computing capabilities,” said Karthik Kannan, management professor at Purdue University’s Krannert School of Business and director of the school’s Business Information and Analytics Center.

“Consider, the iPhone has computational abilities greater than the original Mars Rover,” he said.

Algorithms, essentially step-by-step instructions that tell a computer how to perform a task, are at the heart of artificial intelligence. AI-based algorithms allow software to adjust its own behavior and reach new conclusions as more data is collected.

“To truly have AI, you have to have machine-learning algorithms that get smarter and smarter without human touch,” said Christopher Day, CEO and co-founder of DemandJump, a local marketing tech firm that has been using AI since 2015.
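
In code, the loop Day describes can be surprisingly compact. The sketch below is a generic, invented illustration of a model that adjusts its own parameters with each new data point it sees; it stands in for no particular company’s system, and the feature values are made up.

```python
# Minimal sketch of "learning without human touch": a linear model
# that updates itself as each observation arrives (online gradient
# descent). Illustrative only; not any vendor's production system.

def update(weights, bias, features, target, learning_rate=0.01):
    """Nudge the model's parameters toward a better prediction."""
    prediction = sum(w * x for w, x in zip(weights, features)) + bias
    error = prediction - target
    # Each weight moves a little in the direction that shrinks the error.
    weights = [w - learning_rate * error * x for w, x in zip(weights, features)]
    bias = bias - learning_rate * error
    return weights, bias

# A stream of (features, outcome) pairs, e.g. ad clicks -> sales lift.
stream = [([1.0, 2.0], 5.0), ([2.0, 0.5], 4.0), ([0.5, 3.0], 6.5)]

weights, bias = [0.0, 0.0], 0.0
for features, target in stream:
    # The model gets a little "smarter" with every example it sees.
    weights, bias = update(weights, bias, features, target)
```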

What’s in a name?

McCarthy’s coining of “artificial intelligence” seemed appropriate at the time but is now the source of significant confusion. His thought—and the thought of others then—was that computers would never have the capacity to think or reason. His idea was that the cognitive functions of the human brain were “real,” while any functions of computer software would be “artificial.”

But today’s AI systems are often loosely modeled on the human brain, which is why some experts use the term “neural networks” to describe artificial intelligence. Others prefer to distinguish between “machine intelligence” and “human intelligence.”

One thing is certain: The gap between what machine intelligence and human intelligence can accomplish is narrowing—or even blurring. And that has given life to fears that AI will, sooner or later, make all human workers dispensable.

Other fears are darker.

“There’s a lot of mystery about what AI is. That has a lot to do with what we’ve seen in movies about robot overlords,” IU’s Crandall said. “AI is really about finding patterns in data and being able to make predictions based on those patterns. AI isn’t brains sitting in test tubes.”

Sean Brady, president of Emarsys Americas, a marketing software firm that uses AI to predict retail customer behavior, also isn’t overly concerned.

“The fear of AI goes all the way back to the movie ‘Terminator,’ when a robot was trying to eliminate the human world,” he said.

Emarsys, which has a major hub in Indianapolis, started its push into AI four years ago and uses it for various sales functions, including identifying a target audience based on past behaviors; matching the right marketing approach with the right customer or potential customer; and examining human behavior, such as when people are most likely to read an email that could translate into sales. Emarsys also uses AI to identify people who might stop using a brand and helps that brand win customers back.
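
Churn prediction of this kind is usually framed as a straightforward classification problem. The sketch below shows the general technique using scikit-learn; the feature names and data are invented for illustration, and this is not Emarsys’ actual model.

```python
# Generic churn-scoring sketch. Features and data are invented;
# this illustrates the technique, not Emarsys' actual model.
from sklearn.linear_model import LogisticRegression

# Per-customer features: days since last purchase, purchases in the
# last 90 days, share of marketing emails opened.
X_train = [
    [5, 12, 0.80],   # recent, frequent, engaged
    [45, 2, 0.30],
    [120, 0, 0.05],  # long lapsed, disengaged
    [10, 8, 0.60],
]
y_train = [0, 0, 1, 0]  # 1 = customer eventually stopped buying

model = LogisticRegression().fit(X_train, y_train)

# Score a current customer; the highest-risk ones get win-back offers.
churn_risk = model.predict_proba([[90, 1, 0.10]])[0][1]
print(f"churn risk: {churn_risk:.0%}")
```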

Brady’s view of AI is pragmatic, not apocalyptic. “AI does analytical. People will always do the creative and artistic element,” he said.

“The way AI really works well is when it works with people. We believe the AI we create in our platform helps with decision making. It lets the machine do the mundane and gives you the data.”

Smarter than a human

AI excels in scenarios where it can quickly run calculations or project limited scenarios in rapid-fire succession to figure out the next move. But that ability, IU’s Crandall said, goes only so far.

“AI is superior at chess and math because those things are constrained. Pure computational and mathematical ability solves those problems. … It can do trillions of those within seconds,” Crandall said.

Cameron Smith, vice president of product management for Genesys Telecommunications Laboratories Inc., said AI-driven software can churn through data, solve complex transactions and make forecasts in 30 seconds, a job that would take a human 1.6 million minutes—or a little more than three years.

“Can AI supersede the human mind? We have that today. In some cases, AI performs remarkably better than a human,” Smith said.

But can you ask an AI-driven computer to get up, open a door and walk across a street? No way, Crandall said.

“There are just too many problems to solve—issues with balance, propulsion and evaluation of environmental, not to mention societal, cues,” he said. Even truly autonomous vehicles are years away, he added.

“There are a number of problems just at a simple four-way stop that an autonomous car would have difficulty with,” Crandall said. “There are societal cues when you pull up at the same time and human drivers just with a glance at one another know who should go. AI can’t handle that.”

But Purdue’s Kannan said the reality of driverless vehicles hitting the road en masse is not far away.

Purdue’s Discovery Park is looking at the future of smart mobility. By 2025, Kannan predicted, we’ll see a proliferation of self-driving vehicles. And he said those vehicles could look radically different from what we know today.

“We should not think of autonomous cars as extensions of today’s cars,” he said. “Auto companies are starting to really think differently about this. We’re redefining what mobility looks like.”

Industries such as health care, automotive, financial services, marketing and communications, and logistics could reap—in some cases, already are reaping—massive benefits from a dizzying array of AI deployments. In health care alone, artificial intelligence can give providers better tools for early diagnosis.

Becoming smarter

AI can do a lot more than we thought possible even five years ago. Its most advanced forms make Alexa look and sound like a preschooler.

Neural vision systems can recognize a specific make and model of vehicle, even when it is driving through the desert and is covered with armor, weapons, flags and soldiers. They can understand the difference between a gun sitting on a table, a gun pointed in the air, and a gun pointed at a person. They can estimate the location where a photo was taken, even if it looks dramatically different from the training images a neural-vision system has seen. In some areas, such as audiovisual analysis, deep-learning approaches have been tremendously successful at creating imagery and speech that is eerily human-like.
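
Under the hood, recognition tasks like these come down to running an image through a trained network and reading off its most probable labels. Here is a hedged sketch using a stock pretrained model from the PyTorch ecosystem; the specialized systems described above are far more elaborate, but the basic inference step, image in and ranked labels out, looks much like this. The input file name is hypothetical.

```python
# Neural-vision inference with an off-the-shelf pretrained model
# (torchvision's ResNet-18, trained on ImageNet). A sketch of the
# basic step only; real systems use larger, specialized models.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)
model.eval()  # inference mode: no learning, just prediction

image = Image.open("photo.jpg").convert("RGB")  # hypothetical input
batch = preprocess(image).unsqueeze(0)          # add a batch dimension

with torch.no_grad():
    probabilities = torch.softmax(model(batch), dim=1)[0]

top5 = torch.topk(probabilities, 5)
print(top5.indices.tolist())  # the five most likely ImageNet class ids
```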

It’s difficult for many non-techies to understand just how far and how profoundly AI has advanced. And, indeed, the scope of AI progress is still being defined.

Back in 2011, a high-profile eye-opener was the success of IBM’s Watson supercomputer on the TV quiz show “Jeopardy!”

Facing certain defeat at the hands of the AI-enabled software, Ken Jennings—famous for winning 74 games in a row on the show—acknowledged the obvious. “I, for one, welcome our new computer overlords,” he wrote on his video screen, borrowing a line from a “Simpsons” episode.

For IBM, the showdown was not merely a well-publicized stunt with a $1 million prize, but proof that the company had taken a big step toward a world in which intelligent machines will understand and respond to humans—and perhaps replace some of them.

Five years later, in 2016, officials at the University of North Carolina School of Medicine used Watson to analyze 1,000 cancer diagnoses. In 99% of the cases, the software was able to recommend treatment plans that matched actual suggestions from oncologists. Not only that, but because it can read and digest thousands of documents in minutes, Watson found treatment options doctors missed in 30% of the cases. The AI processing power allowed the computer to take into account all the research papers and clinical trials the oncologists might not have read at the time of diagnosis.

Also in 2016, AI-driven Google software proved to be far superior to people at the ancient strategy game “Go,” which was previously thought to require human intuition. That year, the software beat the world Go champion; in 2017, researchers said they had constructed an even stronger version of the program, one that can teach itself without a jump-start from human data.

Even Emarsys’ Brady, who has little fear of AI, admits: “I’m not sure there are limitations on AI as you look at the way computers evolve. It’s an unknown world.”

Once considered nothing more than a figment of Hanna-Barbera’s cartoon imagination, George Jetson’s robotic maid, Rosie, now seems a distinct possibility.

“Once you dream it, it’s just a matter of executing it,” Brady said. “I think some of those things from ‘The Jetsons’ could certainly become reality.”

Still, current AI shortcomings create serious concerns.

“Worrying about robots is not where to place your fear,” said IU’s Crandall. “My fear is applying AI in areas where we shouldn’t, such as certain medical situations. And we have to remember, these algorithms can discriminate. I worry AI could make decisions on factors we don’t understand and that, from a societal standpoint, don’t make sense.”

One example of AI gone wrong is Amazon’s failed attempt to use the technology to screen job candidates. The company scrapped that initiative in 2016—after nearly three years of development—when it discovered the AI-driven screener was discriminating against women.

AI talent shortage

In 2014, world-renowned physicist Stephen Hawking and business magnate Elon Musk fueled fears about AI when they publicly warned that superhuman artificial intelligence could provide great benefits but could also end the human race if carelessly deployed. The Future of Life Institute, whose scientific advisory board included Musk and Hawking before Hawking’s death in 2018, issued an open letter to the AI research community warning about the difficulty of controlling AI if it reached certain levels.

Despite the ominous warning, most tech companies’ AI concerns are much more mundane. Chief among them is finding enough talent to keep up in the current arms race.

“In some markets, top AI professionals can make $1 million-plus annually,” Crandall said. “This is a very hot job sector. There is a really high demand for AI professionals we just can’t meet right now.”

Several tech experts said colleges are woefully behind in teaching the skills needed for AI.

Crandall said IU and many other schools nationwide are making a big push into AI. Massachusetts Institute of Technology, Carnegie Mellon University, the University of California, Berkeley, and Harvard and Stanford universities have recently started departments dedicated to AI and machine learning. IU, Crandall said, has hired multiple faculty members to teach AI courses and is “starting new classes yearly in AI disciplines.”

Genesys, which has more than 5,000 employees, has 220 developers and another 80-plus support personnel dedicated solely to AI. Growing that staff has been a real challenge.

“It’s difficult to find people to work on AI because of the math required. This is cutting-edge,” Genesys’ Smith said.

And the competition for the big-money talent is arguably more challenging for small companies.

DemandJump, a 33-employee firm, has three staff members focused on research and development, math theory and data sets that deal solely with artificial intelligence.

“The data scientists and machine-learning engineers needed for AI are absolutely difficult to find,” said Day, DemandJump’s CEO. “People who work on AI have to have a deep understanding of complex mathematics and think outside the box to apply that.

“We’ve found that, in many respects, the ability to work on AI is either in your DNA or it’s not.”

So how do small companies compete for talent? “We give them a seat in the front row,” Day said. “We guarantee their work will be put into production and they can see the results.”

While working for a tech giant certainly has its benefits, many in the tech community say employees and their work can get buried in vast research and development divisions.

Universities are also struggling to find and retain AI talent.

“Lots of professors in this field are leaving academia,” Crandall said. “It’s difficult for schools to hire and keep top [AI] faculty because they’re competing with Google, Apple, Facebook, Uber, Tesla and Microsoft. They all have huge AI research labs, and those companies have the resources to really pay up for this rare talent.”•
