What is big data?

We live in a world where almost everything is measurable and where people, along with virtually every
device we can think of, are connected to the Internet 24/7. This network of connections and sensors
generates an incredible amount of data and offers a wealth of new and fascinating possibilities that,
collectively, we call “big data”. In other words, the term big data refers to datasets so large and so
diverse in type that they exceed the analytical capabilities of standard data-processing software, which
is why advanced artificial intelligence tools are needed to work with them.
The key aspects of big data are:
● Volume. This refers to the amount of data generated and stored. The quantity of data determines
its value and the depth of insight it can yield, as well as whether it can be considered big data at
all. Big data is usually measured in terabytes or more.
● Velocity. This refers to the speed at which data is generated and processed in order to meet the
demands and challenges of modern society. Big data is usually available in real time or near real
time.
● Variety. This refers to the types and nature of data. Earlier technologies could efficiently process
only structured data. The shift from structured to semi-structured and unstructured data, however,
has exposed the limits of those existing tools and technologies. Big data technologies have
therefore evolved primarily to capture, store, and process semi-structured and unstructured data
that is generated at high speed and in immense quantities (the three forms are contrasted in the
sketch below).
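
To make the distinction between the three forms concrete, here is a minimal Python sketch; the sensor
names, field names, and values are purely illustrative assumptions, not taken from any particular system.
The same reading is represented as structured, semi-structured, and unstructured data, and the extraction
logic becomes less schema-driven at each step.

    import csv
    import io
    import json
    import re

    # Structured: a fixed schema; every record has the same typed columns.
    structured = "sensor_id,timestamp,temperature_c\n42,2024-01-01T12:00:00,21.5\n"
    row = next(csv.DictReader(io.StringIO(structured)))
    print(row["temperature_c"])                 # -> '21.5' (column known in advance)

    # Semi-structured: self-describing JSON; fields can vary from record to record.
    semi = '{"sensor_id": 42, "readings": {"temperature_c": 21.5, "humidity_pct": 40}}'
    record = json.loads(semi)
    print(record["readings"]["temperature_c"])  # -> 21.5 (nested, flexible shape)

    # Unstructured: free text; there is no schema, so extraction relies on
    # heuristics such as a regular expression.
    text = "Sensor 42 reported a temperature of 21.5 degrees at noon."
    match = re.search(r"temperature of ([\d.]+)", text)
    print(match.group(1))                       # -> '21.5' (pattern may break on new phrasing)

A traditional relational database handles the first case well; the second and third are what big data
platforms were designed to absorb at scale.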
The term big data has been in use since the early 1990s. Although it is not known exactly who coined
it, the person most often credited with popularizing it is the computer scientist John Mashey.