Information = Bits + Context

Introduction

In a computer, everything is represented using 1's and 0's. Thus, if someone asks you what the following 32-bit value means:
1000 0000 0000 1111 1000 0000 0000 1111
You'd have to say "I don't know. Tell me the representation system".

In a computer, these bits could be an unsigned integer, a signed integer, a floating point number, a machine instruction, a memory address, or part of a string of characters.
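To make this concrete, here's a small C sketch that prints the bit pattern above under three of those interpretations. It assumes a machine with 32-bit two's complement integers and IEEE 754 single-precision floats:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* The pattern from above: 1000 0000 0000 1111 1000 0000 0000 1111 */
        uint32_t bits = 0x800F800F;

        /* Same bits, read as an unsigned integer. */
        printf("as unsigned: %" PRIu32 "\n", bits);

        /* Same bits, read as a signed (two's complement) integer. */
        int32_t i;
        memcpy(&i, &bits, sizeof i);
        printf("as signed:   %" PRId32 "\n", i);

        /* Same bits, read as an IEEE 754 single-precision float. */
        float f;
        memcpy(&f, &bits, sizeof f);
        printf("as float:    %g\n", f);

        return 0;
    }

This prints a large unsigned value, a negative signed value, and a tiny denormalized negative float. The bits never change; only the interpretation does.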

So, how does the computer know what the 0's and 1's stand for?

One (incorrect) possibility is that the machine "tags" the data. That is, each bitstring contains some kind of prefix that lets you know whether it's an integer, a floating point value, an instruction, or whatever.
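To picture what such tagging might look like, here's a hypothetical sketch in C. The enum, struct, and field names are all made up for illustration, and (as explained next) real hardware stores no such tag:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical only: real CPUs do not store a tag with each word. */
    enum tag { TAG_INT, TAG_FLOAT, TAG_INSTRUCTION };

    struct tagged_word {
        enum tag kind;   /* the imagined "prefix" saying what the bits are */
        uint32_t bits;   /* the actual 32 bits of data */
    };

    static void describe(struct tagged_word w) {
        switch (w.kind) {
        case TAG_INT:
            printf("0x%08" PRIX32 " tagged as an integer\n", w.bits);
            break;
        case TAG_FLOAT:
            printf("0x%08" PRIX32 " tagged as a float\n", w.bits);
            break;
        case TAG_INSTRUCTION:
            printf("0x%08" PRIX32 " tagged as an instruction\n", w.bits);
            break;
        }
    }

    int main(void) {
        /* The tag, not the bits themselves, would decide the meaning. */
        struct tagged_word w = { TAG_FLOAT, 0x800F800F };
        describe(w);
        return 0;
    }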

In reality, it's all context. There are no such tags. When a computer boots up, there's usually some default address it starts at. It "assumes" that address contains an instruction and starts executing. If the instruction at the address isn't valid, the CPU may cause an exception to occur (i.e., it panics when it sees an invalid instruction).

As the CPU executes instructions, those instructions access data in memory. An instruction may assume the data is an integer, or a float, or something else. The CPU has no means of determining whether the data an instruction accesses actually has that type. It merely assumes it's correct, and interprets the 0's and 1's based on the type built into the instruction. (A program may include instructions that check the data, but then the checking is part of the program, not something the CPU does automatically.)
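Here's a rough C sketch of that idea, again assuming IEEE 754 floats. The constant 0x40490FDB happens to be the single-precision bit pattern of pi, but any value would do. An integer add and a float add applied to the same word produce different bits, because each instruction brings its own interpretation:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        uint32_t word = 0x40490FDB;  /* as a float, roughly 3.14159 */

        /* An integer add treats the word as a plain binary number. */
        uint32_t int_sum = word + 1;

        /* A float add treats the very same bits as 3.14159... */
        float f;
        memcpy(&f, &word, sizeof f);
        f += 1.0f;

        uint32_t float_sum;
        memcpy(&float_sum, &f, sizeof float_sum);

        printf("after integer add: 0x%08" PRIX32 "\n", int_sum);
        printf("after float add:   0x%08" PRIX32 " (%g)\n", float_sum, f);
        return 0;
    }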

Thus, 0's and 1's only have meaning depending on the context in which you interpret them. And fundamentally, a CPU manipulates 0's and 1's and stores information as 0's and 1's.

Side Note

I borrowed the phrase "Information = Bits + Context" from a book titled "Computer Systems: A Programmer's Perspective" by Bryant and O'Hallaron. Clearly this is a "fake" formula, but it captures the idea that information in a computer is stored in 0's and 1's, which only have meaning based on the context.
