What this guide is:
Basic understanding of programming terms.
Basic understanding of programming concepts.
More reference than a guide.
What this guide isn't:
How to program something.
How to make 1337 haxx.
Who this is for:
People who learned how to program by themselves (seriously)
People who have never programmed before, and want to see what it's about.
This guide is about giving some general concepts that you will see in all programming languages. If you have never programmed before, and want to get into it, I would suggest reading this guide for some background information so you understand what is going on.
So let's begin.
Part 1: Memory, bytes, bits, and binary.
A good place to start is memory. Memory stores actively accessed information. A smart man (whose name I forgot) came up with the idea that data that doesn't need to be accessed constantly can live on slow storage, while data that needs to be accessed a lot must live on fast storage. This idea is very much in use today: hard disks (slow, accessed less often) and RAM (fast, accessed constantly).
Memory holds data for the program. It stores variables, information, and whatever else a program needs to run. You may know memory better as "RAM", or Random Access Memory. RAM is divided into bytes.
You have more than likely heard this term before. Each byte in RAM has exactly one address, and stores exactly one value. A byte can be divided into even smaller pieces, called bits.
Bits are parts of bytes, as stated before. Each bit can have one of two values: 1 (on) or 0 (off). This is where binary comes in. Yes, you will learn the concept of binary. It's not that hard.
So let's learn some binary, shall we?
You know how to count to 10, right? And 100?
So in the number 10, how many digits do you have? 2.
What does that first digit represent?
It represents the number of tens there are in the number.
So in the number 34, there are three tens and four ones.
Binary works very similarly, except instead of 10 possibilities for a digit (0-9), there are only 2 (0 and 1).
Here's our 8 bits.
0 0 0 0 0 0 0 0
The last (rightmost) bit represents the number of 1's.
0 0 0 0 0 0 0 1 = 1
The one next to it represents the number of 2's.
0 0 0 0 0 0 1 0 = 2
The one next to it represents the number of 4's.
0 0 0 0 0 1 0 0 = 4
The one next to it represents the number of 8's.
0 0 0 0 1 0 0 0 = 8
Are you noticing a pattern?
Each place you move to the left, the value doubles; every bit's place value is a power of 2.
So the leftmost bit in a byte represents the 128's place.
2^7 2^6 2^5 2^4 2^3 2^2 2^1 2^0
0 0 0 0 0 0 0 0
128 64 32 16 8 4 2 1
Have some practice for yourself with binary numbers.
One more important thing:
1 1 1 1 1 1 1 1
This is the maximum value for a single byte.
Don't bother working it out; I'll just tell you that it's 255 (128 + 64 + 32 + 16 + 8 + 4 + 2 + 1).
In this next section, I will cover how these bytes work together to make bigger types of data (called datatypes, so original).
So what have we learned so far?
1) RAM is divided into many, many different sections called bytes.
2) Bytes are comprised of 8 bits.
3) Bits have 2 positions: 1 (on) and 0 (off).
4) These combinations of on's and off's work together to make up binary.
5) The maximum value for a byte is 255.
Part 2: Primitive data types
Bytes have a max value of 255. So I'm sure some of you reading this are wondering how a computer could possibly represent a number above 255. Computers do it all the time, but how?
The answer, my friend, is datatypes.
Many programming languages have different sets of datatypes, yes, but here are the most common "primitive" data types.
Facts about integers:
-They are signed or unsigned (more about that later).
-They cannot have decimals.
-There are 4 commonly used integer types.
The byte:
Hey, we know what this is! It's a single byte. Its value is between 0 and 255. Not much else to say.
The short:
Also called a 16-bit integer, or if you're oldschool, a Word. It is comprised of, well, 2 bytes. Hence the name 16-bit integer (2 * 8 = 16). It can hold values from -32768 to +32767.
The int:
One of the most common datatypes, and probably the most used integer type. It uses 32 bits (4 bytes) of memory. The oldschool name is a DWord, or Double Word. It can hold values from -2^31 to 2^31 - 1. Yeah, the top end is somewhere around 2.1 billion, in case you were wondering. Oh, but wait, there are even larger numbers. Get ready for...
The long:
Long values have 64 bits, and are 8 bytes long. They range from -2^63 to 2^63 - 1. I honestly have no idea what number that is off the top of my head, and I have never needed to know. Longs are not rare, but not common either. They just pop up every once in a while.
Facts about floating point numbers:
-They are decimal numbers.
-There are 2 commonly used floating point numbers
Floats take up 4 bytes and offer about 7 decimal digits of precision. All you need to know is that they are accurate enough for everyday use, but should not be used for very precise math.
Doubles take up 8 bytes and offer about 15-16 decimal digits of precision. They are called "doubles" because they are double-precision floating point numbers. They are more accurate than floats at the cost of more memory used.
Facts about single byte values:
-They take up, well, only one byte.
-There are 2 of them that are commonly used (and are not the "byte" datatype stated above).
A boolean value holds a true or a false. Usually, 1 is stored for "true" and 0 for "false". In some programming languages, however, a value is considered "false" if it is 0, and "true" if it is anything else.
Characters, generally referred to as "char", take up one byte and use the ASCII encoding (pronounced ask-ee). This includes all English alphabet letters. Each letter, both upper case and lower case, has its own unique value. Standard ASCII defines 128 characters (values 0-127); extended variants use the full 256 values of a byte.
That's all about the most common primitive data types. These are somewhat complicated, so don't feel stressed if you don't remember everything. Even I, a no-life loser who knows most of these by heart, had to look some stuff up to confirm some thoughts that I had.
One final note:
Signed vs unsigned
If you start in a less complex programming language, you won't hear much of this, but it is a very important concept to understand.
Signed numbers can go negative.
Unsigned numbers cannot go negative.
In fact, in an unsigned 32-bit integer, the bit pattern that would mean -1 actually represents 2^32 - 1 (4,294,967,295).
The way signed numbers work is that halfway through their range, the count wraps into the negatives and starts counting up again.
So 32766, 32767, -32768, -32767...
The difference between signed and unsigned numbers only comes up occasionally in Java and C#, but matters a great deal in C++, C, and assembly.
Here's a quiz for you.
1. How many bytes does an "int" datatype have? How many bits?
2. Name all four integer datatypes.
3. How many bytes does a float use in memory? How many does a double use?
4. What does a boolean represent?
5. What is the range of a short?
What we have learned in this part:
1) Bytes are arranged in fashions to make them more useful, called datatypes.
2) There are 4 commonly used integer types.
3) There are 2 commonly used floating point number types.
4) There are 2 commonly used single byte types.
5) ASCII is the table of characters assigned to unique values.
I know, I know, it's a short guide. But I can't think of much else to put in here. I wanted it to be a beginner's guide and help for people who have a few holes in their knowledge about programming. It's helpful to know.
Leave me feedback. I want to know what people think, and whether or not I should make a follow-up intermediate guide.
Thanks for reading.