Beginner’s Intro To AI

Surmount AI
6 min read · Sep 24, 2021

Artificial intelligence is a hot topic these days. For those who are already in the know, it’s easy to see why: AI can do everything from organizing your morning commute and making dinner reservations for you, to identifying diseases and automating your investments. (*wink*)

AI systems are even learning to use English better than most people!

Most people still think they need to be a data scientist or computer engineer to understand and utilize AI, but the key concepts are easy enough for anyone to grasp, even without any technical experience or knowledge of what exactly goes into building an intelligent machine!

In this article, we’ll provide simple explanations of the underlying concepts and principles of artificial intelligence, along with a brief history and its main areas of application.

Definition of AI

Although artificial intelligence is widely discussed these days, there is no set definition for this technology. If you look across the definitions in the Cambridge, Britannica, Oxford, and Merriam-Webster dictionaries, you will notice that they differ slightly. However, it is still possible to draw out the common features that all of them mention. Those are:

  • Seeing artificial intelligence as both a theoretical discipline and a practical approach to creating machines and/or computer systems;
  • A set of methods for mimicking functions of human cognition with those systems. (For example, the ability to recognize and understand speech or images, make decisions, solve problems, discover meanings, reason, learn from past experience to predict the future, etc.)

For a wide audience, the term ‘artificial intelligence’ typically conjures up human-like robots, or perhaps voice assistants and chatbot windows. A better way to think of artificial intelligence, though, is as an abstract bundle of algorithms that can be integrated into almost any interface to suit one’s needs, which gives it great flexibility across different kinds of software and hardware.

When was AI born?

Theories about what human intelligence is and whether it can be mimicked have been a topic of discussion for hundreds of years. In recent decades, however, we’ve approached a new frontier: artificial intelligence (AI). The term was coined in 1956 at an academic conference held at Dartmouth College in Hanover, New Hampshire, and it went on to name its own separate field of study, one that has made many discoveries possible through advances in AI research.

Ever since, scientists have been focused on training machines to imitate human reasoning. Despite early optimism, however, the field has seen large fluctuations in research and funding, resulting in so-called “AI winters” when interest in this type of research dropped close to zero. The latest surge in popularity, which has continued to grow to this day, began when IBM’s Deep Blue defeated the world chess champion, Garry Kasparov, in 1997. It was followed by IBM’s Watson winning ‘Jeopardy!’ in 2011 against two of the show’s best players, demonstrating that machines can now compete fairly well with humans at tasks like natural language processing (NLP).

So, what are the key concepts of artificial intelligence? What are the technologies we should keep in mind when we speak of it?

Main ideas and concepts

When discussing artificial intelligence, people often mention machine learning and data science. These terms are frequently used interchangeably, but they have different meanings, which is worth bearing in mind when thinking about how each could help your business grow or improve certain aspects of it.

A lot has been written on the topic in recent years. One framing attributed to Stanford researchers describes artificial intelligence, broadly speaking, as machine computation applied to tasks that require human-like decision making under uncertainty (Steinert & Horvitz, 2013). While the technology will undoubtedly bring benefits such as increased productivity, its impact may also prove less than ideal. Ultimately, it’s a matter of what humans decide to use the technology for.

As we have mentioned above, artificial intelligence is generally seen either as a theoretical field of study or as an applied discipline that tries to replicate human cognition. While the term has gained great marketing value and sells well, most of the time when someone mentions artificial intelligence, what they actually mean is machine learning.

Machine learning is a subset of artificial intelligence that studies and develops methods for training programs to mimic the intellectual processes of the human brain. Forecasting, behavior assessment, and modeling are all examples of machine learning applications. Machine learning in turn encompasses deep learning as a sub-discipline and one of the methods for training artificial intelligence. What makes deep learning stand out from other approaches is that it can learn from unstructured data with far less human supervision.
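To make the idea of “training a program on examples” concrete, here is a minimal sketch of one of the simplest forecasting techniques, least-squares linear regression: the program learns a slope and intercept from example data, then uses them to predict a new value. The sales figures are invented purely for illustration.

```python
# A minimal supervised-learning sketch: fit a straight line to example
# data (least-squares linear regression), then forecast a new value.

def fit_line(xs, ys):
    """Learn slope and intercept from training examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training data: months of operation vs. monthly sales.
months = [1, 2, 3, 4, 5]
sales = [10, 20, 30, 40, 50]

slope, intercept = fit_line(months, sales)
prediction = slope * 6 + intercept  # forecast sales for month 6
```

Real-world machine learning uses far more sophisticated models, but the shape of the process is the same: learn parameters from past examples, then apply them to new inputs.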

Nevertheless, both supervised and unsupervised training are done through data sets of various kinds. The more data an algorithm has as a reference, the better the outcomes it can produce. Over time, algorithms learn from previous experience and extrapolate that knowledge to new data sets, and these new sets in turn train the algorithms even better, and so on. In supervised learning, an algorithm may be given millions or even billions of labeled examples from past cases as part of its training. It learns which actions were successful in those cases and applies that knowledge to similar future situations, improving its accuracy over time as new data corrects its remaining errors.
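Unsupervised learning, by contrast, works on data with no labels at all: the algorithm has to discover structure on its own. A classic example is clustering. Below is a toy one-dimensional k-means sketch that groups unlabeled points into two clusters without ever being told which point belongs where; the data points are made up.

```python
# A tiny unsupervised-learning sketch: 1-D k-means with two clusters.
# No labels are given; the algorithm finds the two groups by itself.

def kmeans_1d(points, iters=10):
    # Start the two cluster centers at the extremes of the data.
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        # Assign each point to its nearest center...
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        # ...then move each center to the mean of its group.
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

# Made-up measurements that fall into two natural groups.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = kmeans_1d(data)
```

After a few iterations the centers settle near the middle of each natural group, even though nothing in the data said there were two groups to find.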

Where does the data for learning sessions come from? This is a task for data scientists. Data science is a field that deals with the extraction of knowledge from structured and unstructured data. More specifically, data scientists deal with extremely large volumes of information known as big data, a term popularised in recent years, separating out the necessary types of data and sorting them in specific ways so that they can be used in further research.

Now that we have figured out the connections between data science, artificial intelligence, and machine and deep learning, we can talk about the most exciting part. If we want machines to mimic human cognition, it is only natural to choose systems that mimic the structure of the organ responsible for cognition: neural networks.

Artificial neural networks replicate structures and processes in the human brain, the original neural network, which consists of interconnected neurons arranged in many layers. The more layers an artificial neural network has, the more analytical power it gains. Still, even today’s most powerful computers lag behind the actual human brain, because no artificial network yet has as many layers and connections as the brain does, and it will take some time before AI can catch up with humans.
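To see what a “layer” means in practice, here is a toy forward pass through a two-layer network. Each artificial neuron computes a weighted sum of its inputs and squashes it through an activation function; stacking layers of such neurons is what gives the network its analytical power. The weights below are arbitrary illustrative numbers, not learned values.

```python
# A toy neural-network forward pass: one hidden layer of two neurons
# feeding a single output neuron. Weights are made up, not trained.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, passed through a sigmoid activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

x = [0.5, 0.8]  # a 2-value input
hidden = [
    neuron(x, [0.4, 0.6], 0.1),
    neuron(x, [-0.3, 0.9], 0.0),
]
output = neuron(hidden, [1.2, -0.7], 0.2)  # a value between 0 and 1
```

Real deep networks work the same way, just with millions of neurons, many more layers, and weights learned from data rather than chosen by hand.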

In Conclusion

As you can see, it doesn’t necessarily require a lot of in-depth technical knowledge to grasp the basic ideas of artificial intelligence and the core areas of its application. However, the implementation of AI-based solutions does require a very specific set of skills to bring the full potential of this amazing technology to life. If you have any ideas or areas of interest related to artificial intelligence and how it can be integrated with your project, please feel free to reach out to contact@surmount.ai and let’s make it happen together!
