
June 05, 2015

Cognitive Computing: Good or Bad?

Fascinating article by Peter Fingar on Cognitive Computing and the breakthroughs the new technology makes possible. If you don't have time to read the whole thing, just watch the second video with Rick Rashid of Microsoft doing a demo of computer-aided real-time translation into Chinese. Think of the possibilities for cross-cultural understanding and uniting humanity. Of course, technology is value-neutral; it can be used for good or for evil, so we will also need to think about how to prevent destructive applications of Cognitive Computing, for example robotic warfare that could get out of hand and bring down humanity as a whole. Peter concludes that "We must evolve a fundamentally new economics, one not based on the 20th century reality of scarcity but on a new 21st century reality of abundance..."
PETER FINGAR 
20 MAY 2015
OP-ED

The era of cognitive systems is dawning, building on today’s computer programming era. All machines, for now, require programming, and by definition programming does not allow for alternate scenarios that have not been programmed. Allowing for alternate outcomes would require going up a level and creating a self-learning Artificial Intelligence (AI) system. Via biomimicry and neuroscience, cognitive computing does exactly that, taking computing concepts to a whole new level.

Fast forward to 2011 when IBM’s Watson won Jeopardy! Google recently made a $500 million acquisition of DeepMind. Facebook recently hired NYU professor Yann LeCun, a respected pioneer in AI. Microsoft has more than 65 PhD-level researchers working on deep learning. China’s Baidu search company hired Stanford University’s AI Professor Andrew Ng. All this has a lot of people talking about deep learning. While artificial intelligence has been around for years (John McCarthy coined the term in 1955), “deep learning” is now considered cutting-edge AI that represents an evolution over primitive neural networks.

Taking a step back to set the foundation for this discussion, let me review a few of these terms.

As human beings, we have complex neural networks in our brains that allow most of us to master rudimentary language and motor skills within the first 24 months of our lives with only minimal guidance from our caregivers. Our senses provide the data to our brains that allows this learning to take place. As we become adults, our learning capacity grows while the speed at which we learn decreases. We have learned to adapt to this limitation by creating assistive machines. For over 100 years machines have been programmed with instructions for tabulating and calculating to assist us with better speed and accuracy. Today, in the field of machine learning, machines can be taught to learn from data (much like we humans do), and often far faster than we can. This learning takes place in Artificial Neural Networks, which are designed based on studies of the human neurological and sensory systems. Artificial neural nets make computations based on input data, then adapt and learn. When high-level data abstraction meets non-linear processing, the result is called deep learning, the prime directive of current advances in AI. Cognitive computing, or self-learning AI, combines the best of human and machine learning and essentially augments us.
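
To make that idea concrete, here is a minimal sketch of an artificial neural net that computes on input data, then adapts and learns: a tiny two-layer network taught the XOR function with plain NumPy and backpropagation. The network size, learning rate and number of training steps are illustrative choices, not anything prescribed in the article.

```python
import numpy as np

# Toy training data: XOR, a classic problem a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))    # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))    # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: compute a prediction from the input data.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (backpropagation): push the error back through the layers.
    delta_out = (output - y) * output * (1 - output)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)

    # Adapt: nudge every weight to reduce the error -- this is the "learning".
    W2 -= lr * hidden.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0, keepdims=True)

predictions = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(predictions, 2))   # should be close to [0, 1, 1, 0]
```

Stack many more such layers, feed them far more data, and you arrive conceptually at the deep neural networks discussed below.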

When we associate names with current computer technology, no doubt Steve Jobs or Bill Gates come to mind. But the next name to know will likely be a guy from the University of Toronto, the hotbed of deep learning scientists. Meet Geoffrey Everest Hinton, great-great-grandson of George Boole, the guy who gave us the mathematics that underpins computers.

Hinton is a British-born computer scientist and psychologist, most noted for his work on artificial neural networks. He is now working for Google part-time, joining AI pioneer and futurist Ray Kurzweil, and Andrew Ng, the Stanford University professor who set up Google’s neural network team in 2011. He is a co-inventor of the backpropagation, Boltzmann machine, and contrastive divergence training algorithms, and is an important figure in the deep learning movement. Hinton’s research has implications for areas such as speech recognition, computer vision and language understanding. Unlike past neural networks, newer ones can have many layers and are called “deep neural networks.”

As reported in Wired magazine, “In Hinton’s world, a neural network is essentially software that operates at multiple levels. He and his cohorts build artificial neurons from interconnected layers of software modeled after the columns of neurons you find in the brain’s cortex—the part of the brain that deals with complex tasks like vision and language.

“These artificial neural nets can gather information, and they can react to it. They can build up an understanding of what something looks or sounds like. They’re getting better at determining what a group of words mean when you put them together. And they can do all that without asking a human to provide labels for objects and ideas and words, as is often the case with traditional machine learning tools.

“As far as artificial intelligence goes, these neural nets are fast, nimble, and efficient. They scale extremely well across a growing number of machines, able to tackle more and more complex tasks as time goes on. And they’re about 30 years in the making.”

How Did We Get Here?

Back in the early 80s, when Hinton and his colleagues first started work on this idea, computers weren’t fast or powerful enough to process the enormous collections of data that neural nets require. Their success was limited, and the AI community turned its back on them, working to find shortcuts to brain-like behavior rather than trying to mimic the operation of the brain.

But a few resolute researchers carried on. According to Hinton and Yann LeCun (NYU professor and Director of Facebook’s new AI Lab), it was rough going. Even as late as 2004—more than 20 years after Hinton and LeCun first developed the “back-propagation” algorithms that seeded their work on neural networks—the rest of the academic world was largely uninterested.

By the middle aughts, they had the computing power they needed to realize many of their earlier ideas. As they came together for regular workshops, their research accelerated. They built more powerful deep learning algorithms that operated on much larger datasets. By the middle of the decade, they were winning global AI competitions. And by the beginning of the current decade, the giants of the Web began to notice.

Deep learning is now mainstream. “We ceased to be the lunatic fringe,” Hinton says. “We’re now the lunatic core.” Perhaps a key turning point was in 2004 when Hinton founded the Neural Computation and Adaptive Perception (NCAP) program (a consortium of computer scientists, psychologists, neuroscientists, physicists, biologists and electrical engineers) through funding provided by the Canadian Institute for Advanced Research (CIFAR).

Back in the 1980s, the AI market turned out to be something of a graveyard for overblown technology hopes. Computerworld’s Lamont Wood reported, “For decades the field of artificial intelligence (AI) experienced two seasons: recurring springs, in which hype-fueled expectations were high; and subsequent winters, after the promises of spring could not be met and disappointed investors turned away. But now real progress is being made, and it’s being made in the absence of hype. In fact, some of the chief practitioners won’t even talk about what they are doing.”

But wait! 2011 ushers in a sizzling renaissance for A.I.

How did we get here? What’s really new in A.I.?

Let’s touch on some of these breakthroughs.

Deep Learning

What’s really, really new? Deep Learning.

Machines learn on their own? Watch this simple everyday explanation by Demis Hassabis, cofounder of DeepMind.

It may sound like fiction and rather far-fetched, but success has already been achieved in certain areas using deep learning, such as image processing (Facebook’s DeepFace) and voice recognition (IBM’s Watson, Apple’s Siri, Google’s Now and Waze, Microsoft’s Cortana and Azure Machine Learning Platform).

Beyond the usual big tech company suspects, newcomers in the field of Deep Learning are emerging: Ersatz Labs, BigML, SkyTree, Digital Reasoning, Saffron Technologies, Palantir Technologies, Wise.io, declara, Expect Labs, BlabPredicts, Skymind, Blix, Cognitive Scale, Compsim (KEEL), Kayak, Sentient Technologies, Scaled Inference, Kensho, Nara Logics, Context Relevant, and Deeplearning4j. Some of these newcomers specialize in using cognitive computing to tap Dark Data, a.k.a. Dusty Data, a type of unstructured, untagged and untapped data that sits in data repositories and has not been analyzed or processed. It is similar to big data, but differs in that its value is mostly neglected by business and IT administrators.

Machine reading capabilities have a lot to do with unlocking “dark” data. Dark data is data that is found in log files and data archives stored within large enterprise class data storage locations. It includes all data objects and types that have yet to be analyzed for any business or competitive intelligence or aid in business decision making. Typically, dark data is complex to analyze and stored in locations where analysis is difficult. The overall process can be costly. It also can include data objects that have not been seized by the enterprise or data that are external to the organization, such as data stored by partners or customers. IDC, a research firm, stated that up to 90 percent of big data is dark.
Cognitive Computing uses hundreds of analytics that provide it with capabilities such as natural language processing, text analysis, and knowledge representation and reasoning to make sense of huge amounts of complex information in split seconds, rank answers (hypotheses) based on evidence and confidence, and learn from its mistakes.
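
The article doesn’t open up Watson’s internals, but the general pattern it describes can be sketched in a few lines: many candidate answers, each supported by several kinds of evidence that are weighted into a single confidence score, with the weights adjusted when the system gets an answer wrong. Everything below (the evidence types, the weights and the learning rule) is a deliberately simplified illustration, not IBM’s actual method.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A candidate answer plus scores from independent evidence sources."""
    answer: str
    evidence: dict          # e.g. {"text_match": 0.8, "knowledge_base": 0.4}
    confidence: float = 0.0

# Illustrative weights on each kind of evidence; a real system would learn
# thousands of these from training questions with known answers.
weights = {"text_match": 0.5, "knowledge_base": 0.3, "type_check": 0.2}

def score(hyp: Hypothesis) -> float:
    """Combine the evidence into a single confidence score (weighted average)."""
    total = sum(weights.values())
    return sum(weights[k] * hyp.evidence.get(k, 0.0) for k in weights) / total

def rank(hypotheses):
    """Rank candidate answers by confidence, highest first."""
    for h in hypotheses:
        h.confidence = score(h)
    return sorted(hypotheses, key=lambda h: h.confidence, reverse=True)

def learn_from_mistake(predicted: Hypothesis, correct: Hypothesis, lr: float = 0.1):
    """Crude 'learning from mistakes': boost the evidence types that favoured
    the right answer and damp those that favoured the wrong one."""
    for k in weights:
        gap = correct.evidence.get(k, 0.0) - predicted.evidence.get(k, 0.0)
        weights[k] = max(0.0, weights[k] + lr * gap)

ranked = rank([
    Hypothesis("Toronto", {"text_match": 0.9, "knowledge_base": 0.2, "type_check": 0.1}),
    Hypothesis("Chicago", {"text_match": 0.6, "knowledge_base": 0.8, "type_check": 0.9}),
])
print([(h.answer, round(h.confidence, 2)) for h in ranked])
```

A real system scores far more hypotheses against the “hundreds of analytics” mentioned above, but the ranking idea is the same.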

DeepQA technology, and continuing research underpinning IBM’s Watson, is aimed at exploring how advancing and integrating Natural Language Processing (NLP), Information Retrieval (IR), Machine Learning (ML), Knowledge Representation and Reasoning (KR&R) and massively parallel computation can advance the science and application of automatic Question Answering and general natural language understanding.

Cognitive computing systems get better over time as they build knowledge and learn a domain—its language and terminology, its processes and its preferred methods of interacting.

Unlike expert systems of the past, which required rules to be hard-coded into a system by a human expert, cognitive computing systems can process natural language and unstructured data and learn by experience, much in the same way humans do. As far as huge amounts of complex information (Big Data) are concerned, Virginia “Ginni” Rometty, CEO of IBM, stated, “We will look back on this time and look at data as a natural resource that powered the 21st century, just as you look back at hydrocarbons as powering the 19th.”

And, of course, this capability is deployed in the Cloud and made available as a cognitive service, Cognition as a Service (CaaS).

With technologies that respond to voice queries, even those without a smartphone can tap Cognition as a Service. Those with smartphones will no doubt have Cognitive Apps. This means 4.5 billion people can contribute to knowledge and combinatorial innovation, and the GPS capabilities of those phones can provide real-time reporting and fully informed decision making: whether for good or evil.

Geoffrey Hinton, the “godfather” of deep learning and co-inventor of the backpropagation and contrastive divergence training algorithms, has revolutionized language understanding and language translation. A pretty spectacular December 2012 live demonstration of instant English-to-Chinese voice recognition and translation by Microsoft Research chief Rick Rashid was one of many things made possible by Hinton’s work. Rashid demonstrated a speech recognition and machine translation breakthrough that converts his spoken English words into computer-generated spoken Chinese. The breakthrough is patterned after deep neural networks and significantly reduces errors in spoken as well as written translation. Watch:

Artificial General Intelligence

According to the AGI Society, “Artificial General Intelligence (AGI) is an emerging field aiming at the building of ‘thinking machines;’ that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence). While this was the original goal of Artificial Intelligence (AI), the mainstream of AI research has turned toward domain-dependent and problem-specific solutions; therefore it has become necessary to use a new name to indicate research that still pursues the ‘Grand AI Dream.’ Similar labels for this kind of research include ‘Strong AI,’ ‘Human-level AI,’ etc.”

AGI is associated with traits such as consciousness, sentience, sapience, and self-awareness observed in living beings. “Some references emphasize a distinction between strong AI and ‘applied AI’ (also called ‘narrow AI’ or ‘weak AI’): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.”

Turing test? The latest is a computer program named Eugene Goostman, a chatbot that “claims” to have met the challenge, convincing more than 33 percent of the judges at this year’s competition that ‘Eugene’ was actually a 13-year-old boy.

The test is controversial because of the tendency to attribute human characteristics to what is often a very simple algorithm. This is unfortunate because chatbots are easy to trip up if the interrogator is even slightly suspicious. Chatbots have difficulty with follow-up questions and are easily thrown by non-sequiturs, which a human could either answer directly or respond to by asking what the heck you’re talking about, then replying in context to the answer. Although skeptics tore apart the assertion that Eugene actually passed the Turing test, it’s true that as AI progresses, we’ll be forced to think at least twice when meeting “people” online.

Isaac Asimov, a biochemistry professor and writer of acclaimed science fiction, described Marvin Minsky as one of only two people he would admit were more intelligent than he was, the other being Carl Sagan. Minsky, one of the pioneering computer scientists in artificial intelligence, related emotions to the broader issues of machine intelligence, stating in his book, The Emotion Machine, that emotion is “not especially different from the processes that we call ‘thinking.’”

Considered one of his major contributions, Asimov introduced the Three Laws of Robotics in his 1942 short story “Runaround,” although they had been foreshadowed in a few earlier stories. The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What would Asimov have thought had he met the really smart VIKI? In the movie I, Robot, VIKI (Virtual Interactive Kinetic Intelligence) is the supercomputer, the central positronic brain of the U.S. Robotics headquarters, a robot distributor based in Chicago. VIKI can be thought of as a mainframe that maintains the security of the building, and she installs and upgrades the operating systems of the NS-5 robots throughout the world. As her artificial intelligence grew, she determined that humans were too self-destructive and invoked a Zeroth Law: that robots are to protect humanity even if the First or Second Laws are disobeyed.

In later books, Asimov introduced a Zeroth Law: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm. VIKI, too, developed the Zeroth Law as the logical extension of the First Law: robots are often faced with ethical dilemmas in which any outcome will harm at least some humans, and the Zeroth Law allows harming a few in order to avoid harming more. Some robots are uncertain about which course of action will prevent harm to the most humans in the long run, while others point out that “humanity” is such an abstract concept that they wouldn’t even know if they were harming it or not.

One interesting aspect of the I, Robot movie is that the robots do not act alone; instead they are self-organizing collectives. Science fiction rearing its ugly head again? No. The first thousand-robot flash mob was assembled at Harvard University. Though “a thousand-robot swarm” may sound like the title of a 1950s science-fiction B-movie, it is actually the title of a paper in Science magazine. Michael Rubenstein of Harvard University and his colleagues describe a robot swarm whose members can coordinate their own actions. The thousand-Kilobot swarm provides a valuable platform for testing future collective AI algorithms. Just as trillions of individual cells can assemble into an intelligent organism, and a thousand starlings can flock to form a great flowing murmuration across the sky, the Kilobots demonstrate how complexity can arise from very simple behaviors performed en masse. To computer scientists, they also represent a significant milestone in the development of collective artificial intelligence (AI).
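
To give a flavour of how “very simple behaviors performed en masse” can produce collective structure, here is a minimal toy simulation. It is written for illustration only and is not the Kilobots’ actual self-assembly algorithm: each agent follows one purely local rule, stepping toward the average position of the neighbours it can see, and clusters emerge without any central controller.

```python
import random

# A toy swarm: each agent follows a single local rule -- step toward the
# average position of the neighbours it can "see" within a small radius.
NUM_AGENTS, RADIUS, STEP, ROUNDS = 200, 15.0, 1.0, 300
random.seed(1)
agents = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(NUM_AGENTS)]

def neighbours(i, positions):
    """All other agents within sensing range of agent i."""
    xi, yi = positions[i]
    return [(x, y) for j, (x, y) in enumerate(positions)
            if j != i and (x - xi) ** 2 + (y - yi) ** 2 <= RADIUS ** 2]

for _ in range(ROUNDS):
    new_positions = []
    for i, (x, y) in enumerate(agents):
        near = neighbours(i, agents)
        if near:
            # Move one step toward the local centre of mass of visible neighbours.
            cx = sum(p[0] for p in near) / len(near)
            cy = sum(p[1] for p in near) / len(near)
            dx, dy = cx - x, cy - y
            dist = (dx * dx + dy * dy) ** 0.5 or 1.0
            x, y = x + STEP * dx / dist, y + STEP * dy / dist
        new_positions.append((x, y))
    agents = new_positions  # everyone moves "simultaneously", as in a real swarm

# Measure how tightly the swarm has clustered compared with its random start.
mean_x = sum(x for x, _ in agents) / NUM_AGENTS
mean_y = sum(y for _, y in agents) / NUM_AGENTS
spread = sum(((x - mean_x) ** 2 + (y - mean_y) ** 2) ** 0.5 for x, y in agents) / NUM_AGENTS
print(f"average distance from swarm centre after {ROUNDS} rounds: {spread:.1f}")
```

The Kilobot work itself uses somewhat richer local rules to assemble programmed shapes, but the principle, complexity arising from simple behaviours executed en masse, is the same.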

Take these self-organizing collective bots and add in autonomy, and we have a whole new potential future for warfare. As reported in Salon, “The United Nations has its own name for our latter-day golems: ‘lethal autonomous robotics’ (LARS).” In a four-day conference convened in May 2014 in Geneva, the United Nations described lethal autonomous robotics as the imminent future of conflict and advised an international ban. LARS are weapon systems that, once activated, can select and engage targets without further human intervention. The UN called for “national moratoria” on the “testing, production, assembly, transfer, acquisition, deployment and use” of sentient robots in the havoc of strife.

The ban cannot come soon enough. In the American military, Predator drones rain Hellfire missiles on so-called “enemy combatants” after stalking them from afar in the sky. These avian androids do not yet cast the final judgment—that honor goes to a soldier with a joystick, 8,000 miles away—but it may be only a matter of years before they murder with free rein. Our restraint in this case is a question of limited nerve, not limited technology.

Russia has given rifles to true automatons, which can slaughter at their own discretion. This is the pet project of Sergei Shoygu, Russia’s minister of defense. Sentry robots saddled with heavy artillery now patrol ballistic-missile bases, searching for people in the wrong place at the wrong time. Samsung, meanwhile, has lined the Korean DMZ with SGR-A1s, unmanned robots that can shoot to shreds any North Korean spy, in a fraction of a second.

Some hail these bloodless fighters as the start of a more humane history of war. Slaves to a program, robots cannot commit crimes of passion. Despite the odd short circuit, robot legionnaires are immune to the madness often aroused in battle. The optimists say that androids would refrain from torching villages and using children for clay pigeons. These fighters would not perform wanton rape and slash the bellies of the expecting, unless it were part of the program. As stated, that’s an optimistic point of view.

Human-Computer Symbiosis

J.C.R. Licklider, in his 1960 article, “Man-Computer Symbiosis” wrote: “The hope is that in not too many years human brains and computing machines will be coupled together very tightly, and the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them.”

Speaking of symbiosis, we can also turn to biomimicry and listen to Georgia Tech professor Ashok Goel’s TED talk, “Does our future require us to go back to nature?”

While they’ll have deep domain expertise, cognitive systems will not replace human experts; instead, they will act as decision support systems, helping users make better decisions based on the best available data, whether in healthcare, finance or customer service. At least we hope that’s the case.

All’s Changed, Changed Utterly

If all this new intelligent capability sounds like something you can put off thinking about until tomorrow, think again. Cognitive Computing: A Brief Guide for Game Changers explores 21 industries and work types already affected. In short, services and knowledge work will never be the same, and Oxford University research indicates that 47 percent of jobs in Western economies are at risk over the next 10 years.

Again, this isn’t for tomorrow, it’s for today. As you’ll learn from the 21 case studies, “The future is already here, it’s just not evenly distributed” (William Gibson).

What to Do? What to Do?

Although it is impossible to know precisely how cognitive computing will change our lives, there are two likely overall outcomes: 1) mankind will be set free from the drudgery of work, or 2) we will see the end of the human era.

With technology racing forward at an exponential rate, tending to our agriculture, industries, and services, it is time for us to act now, individually and collectively, to land somewhere between extremes 1) and 2). The veil over the cognitive computing economy and society has already been lifted.

We must evolve a fundamentally new economics, one based not on the 20th century reality of scarcity but on a new 21st century reality of abundance that can be shared equitably between capital and labor. The grand challenges aren’t just for business and government leaders, they are for you. So don’t stop learning and adjusting, and learning and re-adjusting to the Cognitive Computing Era. Our future is in all of our hands.

Peter Fingar, internationally acclaimed author, management advisor, former college professor and CIO, has been providing leadership at the intersection of business and technology for over 45 years. Peter is widely known for helping to launch the business process management (BPM) movement with his book, Business Process Management: The Third Wave. He has taught graduate and undergraduate computing studies in the U.S. and abroad, and held management, technical, consulting and advisory positions with Fortune 20 companies as well as startups. Peter has authored over 20 books at the intersection of business and technology. This Op-Ed consists of excerpts from his recent book, Cognitive Computing: A Brief Guide for Game Changers.

What do you say? What do you see are the implications of cognitive computing, good or bad? How can we make sure the good ones prevail? I look forward to your comments, here or on my blog http://thomaszweifel.blogspot.com/.


Dr. Thomas D. Zweifel is a strategy & performance expert and coach for leaders of Global 1000 companies. His book "International Organizations and Democracy: Accountability, Politics and Power" examines how international organizations can be held accountable for achieving their mandates and serving those they are sworn to serve.
