Artificial Intelligence: Definition, Categories, and History
- yassineproabc
- 24 Jul. 2024
- 3 min read
Last updated: 24 Sep. 2024
By Elimane Yassine SEIDOU

"Every aspect of learning and any characteristic of intelligence can be so precisely described that, in principle, a machine should be able to be made to simulate intelligence. Attempts will be made to find out how to create machines capable of using natural language, formulating abstractions and concepts, and solving types of problems usually reserved for humans, and even improving themselves."
John McCarthy, pioneer in the field of Artificial Intelligence
What is Artificial Intelligence?
Artificial Intelligence (AI) is above all a science that combines mathematics and computer science. AI processes large quantities of data to enable computers to solve problems the way humans do. For example, like a human, AI can analyze images and describe them, analyze texts, and make recommendations...
A definition of AI could be: a set of mathematical and computational rules that processes large quantities of data to solve problems faster and with fewer errors than humans.
Categories of Artificial Intelligence
We distinguish three broad categories of AI: Weak AI (Narrow AI), Strong AI (General AI), and Super AI (Conscious AI).
Weak AI : An AI applied to a specific activity, performing a specific task without the ability to learn new tasks on its own.
Strong AI : An AI applied to multiple activities, able to perform multiple tasks and to learn new ones on its own.
Super AI : Still theoretical; it is supposed to be an AI that is self-aware.
History of AI
Early AI Development (1940-1956)
1943 : Warren McCulloch & Walter Pitts propose a neural model, foundational for AI.
1950 : Alan Turing proposes the Turing Test for machine intelligence.
1956 : The term "Artificial Intelligence" is officially coined at the Dartmouth Conference by John McCarthy, Marvin Minsky, Claude Shannon, Nathaniel Rochester, and others.
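The 1943 McCulloch-Pitts neural model mentioned above is simple enough to sketch directly: a unit receives binary inputs, sums them, and fires if the sum reaches a threshold. A minimal illustration (function name and thresholds chosen here for the example, not from the original paper's notation):

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts unit: outputs 1 iff the sum of binary inputs
    meets or exceeds the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, threshold 2 behaves like logical AND,
# threshold 1 like logical OR.
print(mp_neuron([1, 1], 2))  # AND of (1, 1)
print(mp_neuron([1, 0], 2))  # AND of (1, 0)
print(mp_neuron([1, 0], 1))  # OR of (1, 0)
```

Despite its simplicity, this threshold unit is the ancestor of the artificial neurons used in today's deep networks.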
First AI Boom (1956-1974)
1959 : John McCarthy creates the LISP (LISt Processing) language.
1966 : ELIZA, an early chatbot, demonstrates natural language processing (NLP).
1972 : The Prolog programming language, influential in AI research, is created by Alain Colmerauer. (Japan's Fifth Generation AI project would follow in 1982.)
AI Winter (1974-1980)
Decline in funding and optimism due to difficulties in producing tangible AI results.
AI Renaissance (1980-1990)
1980 : Rise of Expert Systems like XCON in business.
1986 : Geoffrey Hinton and colleagues popularize backpropagation, advancing neural networks.
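The core idea behind backpropagation is to push a network's weights in the direction that reduces its error, using the chain rule. A minimal sketch for a single sigmoid neuron with squared-error loss (the variable names and learning rate here are illustrative, not from the 1986 paper):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(w, b, x, y, lr=0.5):
    """One gradient-descent step for a single sigmoid neuron,
    minimizing L = 0.5 * (a - y)^2."""
    a = sigmoid(w * x + b)          # forward pass
    delta = (a - y) * a * (1 - a)   # dL/dz via the chain rule
    return w - lr * delta * x, b - lr * delta

# One step nudges the output toward the target y = 1.
w, b = backprop_step(0.0, 0.0, x=1.0, y=1.0)
```

In a multi-layer network, the same chain-rule term is propagated backward layer by layer, which is what gives the algorithm its name.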
Machine Learning Era (1990-2011)
1997 : IBM’s Deep Blue defeats chess champion Garry Kasparov.
2006 : Geoffrey Hinton and colleagues introduce deep belief networks, reviving deep learning.
2011 : IBM Watson wins at Jeopardy!, showcasing machine understanding of human language.
Rise of Generative AI (2010-2020)
2014 : Ian Goodfellow develops GANs (Generative Adversarial Networks), enabling realistic image, video, and text generation.
2017 : Google’s Transformer architecture revolutionizes NLP, leading to models like BERT and GPT (developed by OpenAI).
2019 : OpenAI releases GPT-2, a powerful text generator.
Generative AI Explosion (2020-2024)
2020 : OpenAI’s GPT-3 is launched with 175 billion parameters, setting new standards for text generation.
2022 : DALL-E 2 by OpenAI generates images from text, expanding AI’s creative capabilities.
2023 : GPT-4 introduces multimodal capabilities, while Claude, Bard (now Gemini), and LLaMA join the competition.
Growth Rate of AI (by period)
1950-1980 : Slow, 2-5% annual growth, focused on academia.
1980-2000 : Moderate, ~10% annual growth with expert systems.
2000-2015 : Fast, 30-40% annual growth due to breakthroughs in machine learning.
2015-2024 : Explosive, over 50% annual growth, with Generative AI driving adoption.
By YaxnAI