
AI: Science fiction or technological evolution?

Artificial intelligence (AI) has revolutionized how we live, communicate, and work. In this article, we explore the long and controversial history of AI from Homer’s Iliad, through Turing’s universal machine, up to the present day.

by Mary Sanford

Artificial Intelligence (AI): one of the hottest and most enigmatic terms of the 21st century. Much has been written and said on the topic, but a great deal of it does more to mystify than explain. That is because most of this content does not address AI’s multidimensional nature. AI is much more than the algorithms that implement it; for us, it is a critical method of value creation for our work in conversational commerce. At Mesaic, we value clarity and transparency when discussing AI and the ways in which it influences our technology, our business, and our clients. These values stem from our understanding of where AI comes from and the implications its history has for its future.

In an effort to share this knowledge and encourage others to consider our holistic perspective on AI, we are publishing a series of articles, of which this piece is the first, to tell the story of AI from its beginnings to its present status and future outlook.

This article will focus on the origins and evolution of AI as an idea and technological reality. It is by no means an exhaustive account and we will certainly gloss over some technical details, as our intention is to strike a balance between these details and the bigger picture of AI development. The next article in the series will talk about AI at Mesaic, and where we see our company developing with it in the future. The third and final article will discuss a few of the ethical controversies surrounding AI and the myriad arguments for and against further development.

„The beginning of our AI narrative stretches much farther back than one might suspect - all the way to ancient Greece.“

AI Origins - the “birth” of a new technology

The beginning of our AI narrative stretches much farther back than one might suspect - all the way to ancient Greece. Contrary to popular-culture conceptions of AI as high-tech robots, supercomputers, and omnipotent algorithms, the human obsession with automata and extra-human intelligence is literally ancient history.

Indeed, scholars agree that the first recorded description of something resembling AI appears in Homer’s epic Iliad, in which the god of metalworking, Hephaestus, creates forging assistants from gold to help him work. Over the following centuries, authors, scientists, and philosophers continued to pursue and develop human understanding of intelligence in man-made objects. For example, the Banu Musa brothers introduced several automata prototypes in their Book of Ingenious Devices, published around 850 AD. Leonardo da Vinci sketched plans for a robotic knight at the end of the 15th century, and roughly a century and a half later René Descartes presented the faculty of reason as the basic component of intelligence, in order to differentiate human intelligence from that of a machine: ‘I think, therefore I am.’

These developments excited many, but frightened many as well. Driven by a fear born of a fundamental lack of understanding of how these semi-intelligent objects functioned, many people advocated their destruction. The more religiously devout considered the development of intelligence in non-human objects a hubristic attempt to best the Almighty. The punishment that the scientist Victor Frankenstein faces in Mary Shelley’s novel after he attempts to create artificial life epitomizes what some people believed would be the consequences of man’s attempt to usurp God’s authority over life and death. Despite these discouragements, infatuation with the idea of artificial intelligence continued to strengthen.

The point we would like the reader to draw from the preceding paragraphs is that the ideas and concepts that initially motivated experimentation in artificial intelligence lived and flourished in complete independence of the computers and digital technology architecture on which we rely today. However, we cannot deny the exponential progress in AI development that digitalization and the formalization of computer science theory enabled.

This part of AI history begins in 1931 with Kurt Gödel’s work on formal languages and the logic of proofs related to computation. Gödel’s contribution to the field of logic also established the fundamentals of theoretical computer science and, at the same time, artificial intelligence. The next major development came five years later and extended Gödel’s work: Alan Turing’s 1936 proof of the undecidability of the Entscheidungsproblem, achieved via his invention of the Turing machine (TM).

The Turing machine introduced in this paper became the abstract model of computation that inspired nearly all subsequent models, including those of the present day. It also inspired the technology that allowed Turing and his team of cryptanalysts at Bletchley Park to break the Enigma cipher during World War II and help the Allies win, as portrayed in the motion picture The Imitation Game, starring Benedict Cumberbatch and Keira Knightley.

The Enigma Machine

Nonetheless, Turing’s interest in computation extended further than logic and mathematical proofs; it also informed his belief in, and arguments for, the possibility of intelligent machines:

My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely. They will make mistakes at times, and at times they may make new and very interesting statements, and on the whole the output of them will be worth attention to the same sort of extent as the output of a human mind. (Alan Turing, c. 1951; published posthumously in 1996)

In 1950, four years before his death, Turing wrote a lengthy treatise debating the question ‘Can machines think?’ Turing’s paper lays out in precise detail a test that could be used to evaluate intelligence in digital computers and answer this question, now known as the Turing Test for machine intelligence, or the imitation game. He also provides counterarguments to nine major objections to machine intelligence, ranging from theological to physiological grounds; we will delve further into these objections in a later article, as many are still relevant today.

Turing’s 1950 paper played a critical role in shaping the trajectory of AI advancement in the latter half of the twentieth century, and its philosophical conjectures continue to influence researchers in cognitive science and philosophy. Indeed, Turing’s work in the 1930s, 40s, and early 50s arguably progressed the development of the budding discipline of computer science more than that of any other individual scholar. At the beginning of his academic career, people still thought of ‘computers’ as humans responsible for carrying out complex computations; by the end of his career, he and his colleagues had laid the foundation for digital computers and irrevocably changed the way humans think of the very word ‘computer.’

Still, the concept and development of AI lacked its own academic ‘home’ distinct from mathematics and logic. This changed in the mid-1950s, when some of the first (digital) computer scientists - including John McCarthy, Herbert A. Simon, Marvin Minsky, and Allen Newell - formally christened the pursuit of intelligent machines as ‘artificial intelligence.’ This moment in AI history is significant for two main reasons: first, it marks the first time AI, as it had been known in various forms for centuries, became a unified discipline of study and research; and second, it officially situated this new discipline within the purview of general computer science, thereby allowing the two to develop alongside one another.

Algorithmic progress

Now that we have reviewed the history of AI preceding and immediately after the advent of digital computation, we can discuss the evolution of the algorithms used to implement AI over the last several decades.

Most early AI, and what is known today as traditional AI, is rooted in statistical methods, namely probability theory. As such, this kind of AI was applied experimentally to tasks such as classification, prediction, and categorization. However, the underlying statistical methods come with a set of requirements that inherently limit possible applications.

For example, probability-based AI needs large and diverse datasets in order to build valid and usable models. Often, entries in these datasets must also be labeled or classified before any AI models can be built on them. Requirements such as these significantly reduce the accessibility of AI for people or companies who lack the data resources to build proper models.
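
To make this concrete, here is a minimal, illustrative sketch in Python, assuming the scikit-learn library is available; the messages and labels are invented for the example and do not describe any particular production system. The key point is that every training example must already carry a human-assigned label before the model can learn anything.

```python
# A minimal sketch of probability-based classification, assuming scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical, pre-labeled data: without these labels, no model can be trained.
texts = ["where is my order", "the app keeps crashing", "how do I return an item"]
labels = ["shipping", "technical", "returns"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)     # turn raw text into word-count features
model = MultinomialNB().fit(X, labels)  # estimate class-conditional word probabilities

# Classify a new, unlabeled message based on the learned probabilities.
print(model.predict(vectorizer.transform(["my order never arrived"])))
```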

Additionally, most statistics-based AI models, especially those used in text and language analysis, rely on parameters that must be exogenously determined or estimated. That is to say, these models base their training on subjectively selected parameters that inevitably bias their development, for better or worse. This aspect of traditional AI severely hampers its internal and external validity, thereby also limiting the extent of its applications.
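
As a small illustration of such an exogenous choice (again a Python sketch assuming scikit-learn; the documents and settings are invented), consider a topic model of customer messages: the number of topics is not learned from the data at all, but picked by the modeler, and that choice shapes every result downstream.

```python
# A sketch of an exogenously determined parameter, assuming scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["shipping delays and lost parcels",
        "app crashes after the latest update",
        "refund policy for returned items",
        "tracking number not recognized"]

X = CountVectorizer().fit_transform(docs)

# n_components (the number of topics) is a subjective, human-made choice:
# the data never tells us whether 2, 5, or 50 topics is "correct".
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)
print(doc_topics)  # per-document topic mixtures, shaped by the chosen topic count
```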

Nonetheless, computer scientists found ways to mitigate the negative impact of these drawbacks, in turn enabling further development and application of traditional AI to an impressive degree. You may have heard of some of these paradigms, the best known being neural networks and deep learning. However, the marginal benefit of each additional unit of effort spent optimizing these paradigms is shrinking, especially as these efforts further obscure the explainability of the resulting algorithms. Recognition of this problem motivated a new wave of researchers to develop AI paradigms that avoid the disadvantages of traditional AI while simultaneously opening up new applications and drastically increasing accessibility.

New AI paradigms focus on pairing neural networks and deep learning with evolutionary algorithms that allow AI not only to make predictions based on data but also to generate new information. These models require neither large volumes of data nor exogenously determined hyperparameters. This new wave of AI is therefore a much more sustainable direction for AI development, one that will hopefully continue to break down barriers of accessibility to the technology.
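
To give a flavor of the evolutionary idea, here is a toy sketch in plain Python/NumPy; the XOR task, mutation size, and network shape are illustrative assumptions, not a description of any specific system. Instead of fitting a network by gradient descent on a large labeled corpus, candidate networks are repeatedly mutated and kept only when they score better.

```python
# A toy neuroevolution sketch with NumPy: mutate a small network's weights and
# keep the mutant only if its fitness improves.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR, a classic non-linear toy problem

def forward(w, x):
    h = np.tanh(x @ w["w1"])                 # small hidden layer
    return 1 / (1 + np.exp(-(h @ w["w2"])))  # sigmoid output

def fitness(w):
    return -np.mean((forward(w, X).ravel() - y) ** 2)  # higher is better

rng = np.random.default_rng(0)
best = {"w1": rng.normal(size=(2, 4)), "w2": rng.normal(size=(4, 1))}
for _ in range(2000):
    child = {k: v + 0.1 * rng.normal(size=v.shape) for k, v in best.items()}  # mutate
    if fitness(child) > fitness(best):                                        # select
        best = child

print(np.round(forward(best, X).ravel(), 2))  # outputs should drift toward [0, 1, 1, 0]
```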

„The ideas and concepts that initially motivated experimentation in artificial intelligence lived and flourished in complete independence of the computers and digital technology architecture on which we rely today.“

AI advancements today

In this final section, we provide a few examples of current AI applications that illustrate the twenty-first-century culmination of this age-old obsession.

Business

It is nearly impossible to imagine an industry untouched by AI-driven innovation. Applications range from smarter analytics and resource management to automated and optimized customer service practices. These innovations prove their value by reducing costs and improving customer relations, thereby reinforcing customer loyalty and increasing revenue.

Applications such as natural language processing (NLP) allow companies to build and deploy smarter chatbots that quickly recognize customer intent and direct the customer to the proper customer service representative. This pipeline minimizes stress and time spent in unproductive interactions while maximizing customer satisfaction. Data is the new business currency, and recent developments in machine learning (ML) can help companies learn from their data, draw actionable insights, and optimize their business operations in real time.
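
As a rough sketch of what such an intent-routing step might look like (Python with scikit-learn; the intents, example phrases, and routing table are made up for illustration and say nothing about Mesaic’s actual pipeline):

```python
# A hedged sketch of intent-based routing; the data and routing table are invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

training_phrases = ["I want my money back", "refund please",
                    "track my package", "where is my delivery",
                    "cancel my subscription", "stop billing me"]
intents = ["refund", "refund", "shipping", "shipping", "cancellation", "cancellation"]

# Learn to map free-text messages to intents...
intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_model.fit(training_phrases, intents)

# ...then route each recognized intent to the right team.
routing = {"refund": "billing team",
           "shipping": "logistics team",
           "cancellation": "retention team"}

message = "where is my package"
intent = intent_model.predict([message])[0]
print(f"Detected intent '{intent}' -> route to the {routing[intent]}")
```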

Lastly, several companies now use AI to power their marketing strategies. With the help of AI, marketing companies can turn user data into high-precision targeting recommendations for individual campaigns that can boost efficiency and key performance indicators (KPIs) significantly.

Healthcare

In recent years, healthcare data has multiplied many times over and is now, in 2018, said to double every sixty days. To manage this data deluge, several companies, including IBM, Google, and Apple, have begun developing AI applications to facilitate various healthcare operations. For example, IBM has spent billions of dollars developing Watson Health, an AI-powered medical advisor trained on medical textbooks and journals and by expert professionals in order to provide better medical care by extending the expertise of doctors and nurses.

The idea is that Watson, with its vast storage capacity and next-level ‘cognitive’ intelligence, can keep track of new medicines and treatment programs more reliably than humans can. However, IBM stresses that this technology is not designed to replace human intelligence, but rather to support and augment it.

Other applications of AI in healthcare include drug discovery, autonomous robotic surgery, personalized medicine, and image diagnostics, to name a few. While many remain skeptical of allowing machines to influence medical decisions due to a lack of explainability or algorithmic accountability - that is, the AI cannot explain why it arrives at one decision rather than another - many companies continue to spend tremendous amounts of time and money on developing this technology and on convincing the skeptics that AI-powered medicine is the new frontier of healthcare.

Social media and search

Applications of AI abound in the social media platforms and search engines we rely on every day. AI algorithms leverage patterns in our online behavior to predict what we want to see on our Facebook, Twitter, and Instagram feeds and what we are likely to be searching for given our previous queries, geographic location, and basic demographic information, as well as to push new content that the algorithms calculate to have the highest probability of capturing our attention. All of these insights derive from traces of our online activity that tools such as Google Analytics capture, analyze, and sell to third-party companies.

What's next?

This article covered a lot of ground: from the first recorded reference to artificial intelligence in ancient Greece to our present day of AI-flavored everything. AI technology increases in complexity every day; its roots and origins do not. Perhaps understanding the latter better will help us as humans keep track of the ethical quandaries generated by AI development, and ensure that what we create will not ultimately lead to our subversion. In a subsequent article, we will explore some of the ethical controversies surrounding AI and how the views of leaders from industry and academia have changed (or not) along the thread of AI development through history. Before we get into that conversation, we will present our current work and future ideas for AI at Mesaic in the next article. Stay tuned!

„Perhaps better understanding the origins of AI will help us as humans keep track of the ethical quandaries generated by AI development, and ensure that what we create will not ultimately lead to our subversion.“
Mary Sanford
Mary combines her AI specialization with a diverse background in data science, social science, and neuroscience to design innovative, AI-powered business tools.
