
The Future: Can Artificial Intelligence Learn to Be More Human?

You’ve probably seen the commercial for Google Assistant where pop star Sia is standing in front of a mirror during a wardrobe fitting. She is mentally making a list of things she needs to get done, when it occurs to her that she should probably get some flowers to thank her assistants. But of course, she can’t ask the assistants to buy their own flowers. That defeats the purpose. The commercial then helpfully suggests that she “Make Google do it.”

The commercial marks the introduction of Google Duplex, the latest Artificial Intelligence (AI) program behind Google Assistant, the tech giant’s answer to the well-known Siri and Alexa. Unlike its competitors, Google Assistant will not only search the Internet for you, it will literally act as your personal assistant—calling for restaurant reservations or checking the hours of your local grocery store.

Creative application of AI is a research focus of Assistant Professor of Computer Science Brian O’Neill. He considers Google Duplex to be the headline news of the Artificial Intelligence industry, not only for its usefulness, but also for the innovation that went into programming the technology. While Google Duplex is a creative use of technology, he hopes AI can be developed along artistic avenues as well as practical ones.

“What I’ve been working on at a broad level is AI that is creative or that helps people be creative,” he explains. “I’d like to see AI more involved with storytelling because that’s how we, as humans, communicate. So far, they’re terrible at it. Left to their own devices, all computer stories are bad because they don’t know what makes stories entertaining.”

Professor O’Neill, who teaches undergraduate courses in AI, began helping AI improve its storytelling ability during his dissertation, starting with teaching it to recognize and re-create suspense. It’s an interesting challenge because computers don’t share the human frame of reference and have no emotions to guide them. This means that Professor O’Neill has to bring his computers up to speed by providing a great deal of information and building a system that measures how suspenseful a story is and why. Though suspense may seem an abstract concept, it hinges on several key factors that can be taught to AI.


DR. BRIAN O’NEILL

Assistant Professor of Computer Science

EDUCATION

Ph.D., Georgia Institute of Technology
M.S., Georgia Institute of Technology
B.S., Saint Joseph’s University

COURSES TAUGHT

Artificial Intelligence
Data Structures
Introduction to Programming
Software Design
Machine Learning (in development)

TOPICS TAUGHT

Heuristic Search
Hill-climbing Search
Game Playing
Constraint-satisfaction Problems
Logic
Bayesian Networks
Neural Networks
Decision Trees
Reinforcement Learning

PROFESSOR O’NEILL

has collaborated with Western New England students and faculty on AI research, including teaching AI how to recognize and re-create surprise and working on a program for an AI competition involving a solitaire card game.


“There’s a lot of psychological research on suspense, which is what we built from,” he says. “It’s somewhat measurable because it’s based on how likely it is that you think someone is going to get out of their dilemma. If the problem doesn’t seem to be a big deal, then you don’t feel much suspense. But if you don’t see how they’re going to get out of it, that creates tension. But there is a flip side. If you believe a character is truly doomed, you stop feeling suspense. It’s important that you also feel a sliver of hope for the character.

“I’ve also been trying to build on the same idea with teaching AI about surprise in my research here at Western New England. That’s been harder because surprise can operate in many ways. It’s more about what information you have and what information you don’t have, as opposed to what the character may or may not know.”
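The suspense measure Professor O’Neill describes—tension rises as a character’s odds of escape shrink, but vanishes entirely once the character seems truly doomed—can be caricatured in a few lines. This is only a toy sketch of that intuition, not his actual research model; the function name and the simple linear shape are illustrative assumptions.

```python
def suspense_score(p_escape: float) -> float:
    """Toy suspense measure: suspense grows as the audience's estimated
    probability that the character escapes the dilemma falls, but drops
    to zero when there is no sliver of hope left."""
    if p_escape <= 0.0:
        return 0.0  # certain doom: the audience stops feeling suspense
    return 1.0 - p_escape  # slim hope -> high suspense; easy escape -> low

# An easy way out produces little suspense, a narrow one produces a lot,
# and a hopeless situation produces none at all.
easy = suspense_score(0.9)
narrow = suspense_score(0.05)
doomed = suspense_score(0.0)
```

The flat linear shape is the crudest possible choice; a real model would weight how much the audience cares about the character and how the odds change over the course of the story.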

"We’re learning about what makes people creative and what people go through as a creative process by trying to reproduce it. AI is definitely a way for us to understand our own cognitive processes and what’s going on in our heads." - Dr. Brian O’Neill

However AI technology evolves in the artistic realm, Professor O’Neill doesn’t predict that AI will out-create humans. Rather, he sees it being a tool for people who want to create, whether it provides musical accompaniment to a saxophone player, or acts as a tutor to a budding singer. He envisions more of a human/AI team, rather than creative competitors.

Overall, Professor O’Neill feels that the ability of AI is underutilized, simply relegated to the tasks that we don’t want to do ourselves, such as vacuuming our homes or making appointments. In fact, he sees potential in AI doing more innately human tasks, not just for the sake of technological advancement, but also as a study of human behavior.

Programming AI isn’t always a matter of entering code and hoping it works. When researchers teach robots and computers to tackle a task that isn’t strictly objective, the machines must understand how and why to do it, which forces the human programmer to dig deeper cognitively to give the computer a working frame of reference. For example, for AI to write a story, you must tell it how to write one—and from a human perspective, there is no concrete answer to that. Instructing the AI therefore requires understanding ourselves at the deepest levels. Think of it as an ongoing psychology experiment.

Most industry experts discuss AI from the standpoints of the mechanics and machinery, or what it can do for the world of technology. However, even in the practical realms of AI, the human touch is always evident.

"Most research being done now is on machine-to-machine learning—taking the human element out. We feel that the human must always be at the center of it because there’s no way to codify value judgment." - Dr. Chris Ilacqua

Applying Augmented Reality to Storytelling

Dr. Chris Ilacqua ’82/G’84 is the senior director of analytics and AI at Qlik, a company that has created an AI program that allows users to compile and manipulate large amounts of data so that they can more easily identify trends and needs. He attributes Qlik’s success to the fact that the program optimizes functions but keeps humans involved in the decision-making.


Dr. Chris Ilacqua ’82/G’84


Qlik gives users a platform to take their data and make predictions based on certain inputs, letting companies run the trial-and-error process for a product or service within the confines of an algorithm, rather than on store shelves or in the stock market. The AI used here is all about numbers, but the outcome is very much human.

“Our goal is to automate the mundane portion of any task so the customer can get to the storytelling aspect,” Dr. Ilacqua says. “We are storytellers by nature; that is a part of the human condition. So the question always becomes ‘how do we shorten the time between data acquisition to actual storytelling that changes behavior?’ We have more complex data than ever and less time to react to it, so by optimizing tasks, we give the humans more time to analyze and find the golden nugget of information and run with it.”

The storytelling in this case is the branding of a product or company, marketing materials, and press releases—anything that creates a connection to the product and makes the consumer feel a need to buy in. Additionally, by compiling data faster and in a way that gives a company the numbers and statistics to back up its product, the platform lends that company more credibility on reliability and customer satisfaction.

AI continues to revolutionize and simplify concepts like data mining, and the business world is taking notice. Software such as SAP (Systems Applications and Products), which is used to integrate different business functions and departments of a company in order to create streamlined flows, is standard at many of the largest organizations.

Knowledge of this intricate but incredibly effective program is so in-demand that Western New England University has added SAP courses and certification into the College of Business curriculum and applications to the Business Analytics and Information Management program are on the rise.

However, as AI filters into both our work and play, experts warn that AI systems are not inherently unbiased: they inherit the biases of their programmers, whether that is the intention or not. This sometimes creates new solutions to our real-world problems that are ultimately no better than our old ones.

“If your algorithm favors or doesn’t favor a particular group then that gets carried through to the AI,” Professor O’Neill cautions. “The AI doesn’t know any better and that is an issue. It’s important for people to recognize that AI is not a neutral party.”

However, programmers and developers can cut down on and adjust for bias by asking the AI to justify its decisions—the cyber-version of showing its work. By testing a program and having the computer explain how it came to certain conclusions and why, programmers can then spot the biases on a more objective level and adjust for them. 
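One of the simplest objective checks behind this idea is comparing a model’s decision rates across groups—if one group is approved far less often than another, the system gets flagged for closer inspection. The sketch below is a hedged illustration of that audit, not a description of any particular tool; the group labels, the 25-point gap threshold, and the function names are all illustrative assumptions.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Given (group, approved) pairs from a model's past decisions,
    return each group's approval rate -- a crude first check for bias."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical audit log: group label plus the model's yes/no decision.
log = [("A", True), ("A", True), ("A", False),
       ("B", False), ("B", False), ("B", True)]
rates = approval_rates(log)

# A wide gap between the best- and worst-treated groups is the signal
# to go back and ask the model to justify those decisions.
gap = max(rates.values()) - min(rates.values())
flagged = gap > 0.25  # illustrative threshold, not a standard
```

Real bias audits go much further—controlling for legitimate factors, examining the model’s explanations case by case—but even this coarse comparison makes the bias visible in a way a programmer can act on.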

It’s also critical to note that no form of AI is complete once it works to satisfaction. It needs constant tweaking and updating to keep up with technological advances and societal norms, as well as federal laws and regulations. It’s these constant gains in capability that tend to make people nervous.

Scripting the Future

On the future of AI, Dr. Ilacqua remarked, “It’s like having super powers—do you have the wisdom to use them properly? Are you using them for personal gain or to improve quality of life?”

As for a Terminator-style robot apocalypse, Professor O’Neill doesn’t see it happening in our lifetime.

“I’m not sold on it,” he insists. “We keep putting checks on technology because we demand perfection in its use. I’m hesitant that the level of super intelligence is actually going to be approachable. Knowing what we have trouble getting AI to do now, I have a hard time seeing anything like that ever happening. I feel like it’s just too far away at this point.”

So whether AI becomes more human, or simply performs human tasks, it’s safe to say the world as we know it will remain…for now.