For example, in signed or unsigned integer representations, overflow occurs when an operation produces a value larger than the largest representable value or smaller than the smallest representable value.
Overflow can happen in floating point operations too. You can create a value larger in magnitude than the largest representable magnitude (which is +/- 1.11111111111111111111111 x 2^127 in IEEE 754 single precision). Values larger than that are rounded to infinity, which may not be what you want.
There's also a related term that applies to floating point numbers: underflow. Underflow occurs when an operation produces a result smaller in magnitude than the smallest representable non-zero magnitude. In IEEE 754 single precision this means a value whose magnitude (i.e., absolute value) is less than 1.0 x 2^-149.
Such numbers are often rounded to 0, which may not be what you want to happen. While rounding to 0 has a small effect on addition (adding 0 instead of a tiny value barely changes the sum), it has a large effect on multiplication (multiplying by 0 instead of a tiny value wipes out the result entirely).
You might wonder why the smallest magnitude IEEE 754 floating point number is not 1.0 x 2^-126, the smallest normalized value. That's because there's a class of numbers called denormalized numbers which allow floating point values to get much smaller, by sacrificing significant bits. Denormalized numbers are smaller in magnitude than all normalized numbers.
Denormalized numbers are sometimes said to provide gradual underflow because of this phenomenon of losing significant bits as numbers get smaller and smaller.
Overflow and underflow are significant enough error conditions that your program may want to detect them, or be alerted when they occur.