WHAT IS AI OR ARTIFICIAL INTELLIGENCE?

Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a single narrow task (e.g., only facial recognition, only internet search, or only driving a car), even though the broader ambition is to build smart machines capable of performing any task that typically requires human intelligence.

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly, and the long-term goal of many researchers is to create general AI (AGI, or strong AI).

The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.

Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility in tasks requiring everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals at certain specific tasks, so artificial intelligence in this limited sense is found in applications as diverse as car diagnostics, search engines, and voice or handwriting recognition.


Early Milestones in AI: The First Artificial Programs

The first successful AI program was written in 1951 by Christopher Strachey, who later became director of the Programming Research Group at the University of Oxford. His checkers program ran on the Ferranti Mark I computer at the University of Manchester in England, and by the summer of 1952 it could play a complete game of checkers at a reasonable speed.

The first published account of this work appeared at the end of 1952. Shopper, developed by Anthony Oettinger at the University of Cambridge, ran on the EDSAC (electronic delay storage automatic calculator) computer.
Shopper's simulated world was a mall of eight shops. When instructed to purchase an item, Shopper would search for it, visiting shops at random until the item was found. While searching, Shopper would memorize a few of the items stocked in each shop it visited, just as a human shopper might. The next time Shopper was sent out for the same item, or for another item it had already located, it would go straight to the right shop.

The first AI program to run in the US was also a checkers application, developed by Arthur Samuel in 1952 for the prototype of the IBM 701. Samuel took over the essentials of Strachey's checkers program and considerably extended it over the years. In 1955 he added features that enabled the program to learn from experience. Samuel included mechanisms for both rote learning and generalization, enhancements that eventually led to his program winning one game against a former Connecticut checkers champion in 1962.


The Four Types of Artificial Intelligence
Reactive Machines

A reactive machine follows the most basic of AI principles and, as its name implies, is capable of using its intelligence only to perceive and react to the world directly in front of it. A reactive machine cannot store memories and as a result cannot rely on past experiences to inform decision-making in real time.

Perceiving the world directly means that reactive machines are designed to complete only a limited number of activities. Intentionally narrowing a reactive machine's worldview is not a cost-cutting measure, however; it means that this type of AI will be more trustworthy and reliable.

A famous example of a reactive machine is Deep Blue, which was designed by IBM in the 1990s as a chess-playing supercomputer and defeated international grandmaster Garry Kasparov in a game. Deep Blue was only capable of identifying the pieces on a chess board, knowing how each moves based on the rules of chess, acknowledging each piece's present position, and determining what the most logical move would be at that moment. The computer did not consider its opponent's potential future moves or try to put its own pieces in better position. Every turn was viewed as its own reality, separate from any other movement made beforehand.
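A reactive machine can be modeled as a pure function from the current state to an action, with nothing carried over between turns. The toy below makes that concrete; the game (a 3x3 board) and the heuristic are hypothetical illustrations, not Deep Blue's actual algorithm.

```python
# Toy illustration of a reactive machine: a pure function from the
# current board state to a move, with no memory of past turns.
# The game and the heuristic are hypothetical; this is not Deep Blue.

CENTER_FIRST = [3, 2, 3, 2, 4, 2, 3, 2, 3]  # prefer the center, then corners

def legal_moves(board):
    """Indices of empty cells on a 3x3 board."""
    return [i for i, cell in enumerate(board) if cell == " "]

def react(board):
    """Choose a move from the present state only:
    no lookahead, no record of previous positions."""
    return max(legal_moves(board), key=lambda m: CENTER_FIRST[m])

board = ["X"] + [" "] * 8   # the opponent has taken the top-left corner
print(react(board))         # the reactive agent takes the center (index 4)
```

Because `react` depends only on its argument, calling it twice on the same position always yields the same move: exactly the "every turn is its own reality" behavior described above.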

Another example of a game-playing reactive machine is Google's AlphaGo. AlphaGo is likewise incapable of evaluating future moves, relying instead on its own neural network to evaluate developments in the present game, which gave it an edge over Deep Blue in a more complex game. AlphaGo has also bested world-class competitors, defeating champion Go player Lee Sedol in 2016.

Though limited in scope and not easily altered, reactive machine artificial intelligence can attain a degree of complexity, and it offers reliability when created to fulfill repeatable tasks.


Limited Memory
Limited memory artificial intelligence can store previous data and predictions while gathering information and weighing potential decisions, looking to the past for clues about what may happen in the future. Artificial intelligence with limited memory is more complex and presents greater possibilities than reactive machines.

Limited memory AI is created either when someone continually trains a model to analyze and utilize new data, or when an AI environment is built so that models can be automatically trained and renewed. When utilizing limited memory in machine learning, these steps must be followed:

  • Create training data
  • Create a machine learning model
  • Ensure the model can make predictions
  • Ensure the model can receive human or environmental feedback
  • Store that feedback as data
  • Repeat these steps as a cycle
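The cycle above can be sketched as a minimal training loop. Everything here is hypothetical scaffolding; the "model" is deliberately trivial (a running mean) so that only the train / predict / feedback / retrain rhythm is on display.

```python
# Minimal sketch of the limited-memory cycle: train a model, let it
# predict, collect feedback, store the feedback as new data, retrain.
# The "model" is a toy: just the mean of the observations seen so far.

def train(data):
    """Fit the toy model: here, simply the mean of the observations."""
    return sum(data) / len(data)

def predict(model):
    return model

training_data = [10.0, 12.0, 11.0]       # 1. create training data
model = train(training_data)             # 2. create the model
for _ in range(3):
    guess = predict(model)               # 3. make a prediction
    feedback = guess + 1.0               # 4. receive (simulated) feedback
    training_data.append(feedback)       # 5. store feedback as data
    model = train(training_data)         # 6. reiterate the cycle

print(round(model, 2))
```

Each pass through the loop folds fresh feedback into the stored data, so the model's next prediction reflects a slightly larger slice of the past, which is the defining trait of limited memory AI.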

Three major machine learning models utilize limited memory AI:

Reinforcement learning, which learns to make better predictions through repeated trial and error.

Long short-term memory (LSTM), which utilizes past data to help predict the next item in a sequence. LSTMs treat more recent information as most important when making predictions and discount data from further in the past, though they still use it to form conclusions.
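That recency bias can be illustrated without a neural network at all. The sketch below is a hypothetical stand-in for the behavior just described, not an actual LSTM: an exponentially weighted average in which recent items count more heavily while older ones still contribute.

```python
# Illustration of the recency weighting described above: recent items
# in a sequence count more toward the prediction than older ones.
# This is a plain exponentially weighted average, a hypothetical
# stand-in for LSTM behavior, not an LSTM implementation.

def predict_next(sequence, decay=0.5):
    """Weight each item by decay**age, so the newest item dominates
    but every past item still contributes to the prediction."""
    n = len(sequence)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    total = sum(w * x for w, x in zip(weights, sequence))
    return total / sum(weights)

print(predict_next([1.0, 1.0, 9.0]))  # pulled strongly toward the recent 9.0
```

With `decay=0.5` the newest item carries twice the weight of the one before it, mirroring the idea that an LSTM discounts, but does not discard, older data.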

Evolutionary Generative Adversarial Networks (E-GAN), which evolve over time, growing to explore slightly modified paths based on previous experiences with every new decision. This model is constantly in pursuit of a better path and utilizes simulations and statistics, or chance, to predict outcomes throughout its evolutionary mutation cycle.
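The mutate-and-select cycle at the heart of that description can be shown with a toy evolutionary loop. This is generic evolutionary search over a made-up objective, not an actual E-GAN (which evolves GAN generators); the objective function and parameters are assumptions for illustration.

```python
import random

# Toy evolutionary loop in the spirit described above: each generation
# explores slightly modified variants of the best candidate so far and
# keeps whichever scores highest. Generic evolutionary search, not an
# actual E-GAN; the objective below is a hypothetical example.

def fitness(x):
    """Hypothetical objective with its peak at x = 3."""
    return -(x - 3.0) ** 2

def evolve(start, generations=200, children=8, step=0.5, seed=0):
    rng = random.Random(seed)   # fixed seed for a repeatable run
    best = start
    for _ in range(generations):
        # Mutate: slightly modified variants of the current best.
        variants = [best + rng.uniform(-step, step) for _ in range(children)]
        # Select: keep the fittest of the parent and its children.
        best = max(variants + [best], key=fitness)
    return best

print(round(evolve(0.0), 1))   # converges near the peak at 3
```

Each generation mutates the best path found so far and keeps only improvements, the same "constantly in pursuit of a better path" dynamic the paragraph above attributes to E-GANs.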


Theory of Mind
Theory of Mind is just what it sounds like: theoretical. We have not yet achieved the technological and scientific capabilities necessary to reach this next level of artificial intelligence. The concept is based on the psychological premise that other living things have thoughts and emotions that affect one's own behavior. In terms of AI, this would mean that machines could comprehend how humans, animals, and other machines feel and make decisions through self-reflection and determination, and then utilize that information to make decisions of their own. Essentially, machines would have to be able to grasp and process the concept of "mind," the fluctuations of emotion in decision-making, and a litany of other psychological concepts in real time, creating a two-way relationship between people and artificial intelligence.


Self-awareness
Once "theory of mind" can be established in artificial intelligence, sometime well into the future, the final step will be for AI to become self-aware. This kind of artificial intelligence possesses human-level consciousness and understands its own existence in the world, as well as the presence and emotional state of others. It would be able to understand what others may need based not just on what they communicate to it but on how they communicate it.

Self-awareness in artificial intelligence relies on human researchers first understanding the premise of consciousness and then learning how to replicate it so that it can be built into machines.


AI Safety

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As I. J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion and leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.
There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. We have to recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to cause great harm, intentionally or not. Research today will help us better prepare for and prevent such potentially negative consequences in the future, letting us enjoy the benefits of AI while avoiding its pitfalls.


Conclusion

Artificial intelligence and machine learning are products of both science and myth. The idea that machines could think and perform tasks just as humans do is thousands of years old. The cognitive truths expressed in AI and machine learning systems are not new either. It may be better to view these technologies as the implementation of powerful and long-established cognitive principles through engineering.

We should accept that there is a tendency to approach all important innovations as a Rorschach test upon which we impose anxieties and hopes about what constitutes a good or happy world. But the potential of AI and machine intelligence for good does not lie exclusively, or even primarily, within its technologies. It lies mainly in its users. If we trust how our societies are currently being run, then we have no reason not to trust ourselves to do good with these technologies. And if we can suspend presentism and accept that ancient stories warning us not to play god with powerful technologies are instructive, then we will likely free ourselves from unnecessary anxiety about their use.
