C# Data Types and Variables

Numbers

Contents

Course Intro & Overview
  1. Introduction (49s)
  2. Overview (1m 18s)
Summary

The course is part of this learning path

A Practical Introduction to C# Programming
Overview

Difficulty: Beginner
Duration: 50m
Students: 131
Ratings: 5/5
Description

In this course, we look at how different types of data are stored using variables within a C# program. C# is a strongly typed language, meaning when you manipulate data in code, you must keep the data in variables that are specifically designed to hold that kind of data. For example, text is stored in a string data type and a single letter in a char. There are over ten numeric data types that vary in size and in the accuracy, or precision, with which they can faithfully represent data. We investigate some of the quirks of dealing with fractional numbers in a computer's binary environment. There are in-depth code examples for each of these topics to illustrate the discussed concepts and make you more familiar with C# programming in general.

This course builds upon the key concepts and examples covered in the Introduction to C# course. It includes guided demonstrations to give you practical knowledge of how to handle the concepts covered.

Learning Objectives

  • Understand what variables are and how they're stored
  • Learn about data types for storing and manipulating text values
  • Learn about the various data types for storing and manipulating whole and fractional numbers
  • Learn about variables for storing multiple values of the same data type

Intended Audience

This course is intended for anyone who has a basic understanding of C# and now wants to build upon that knowledge.

Prerequisites

This course carries on from our Introduction to C# course, so we suggest taking that one first if you haven't already done so.

 

The code examples used in the demos are available on GitHub:

https://github.com/cloudacademy/csharp-datatypes-variables

 

Transcript

Representing numbers in programs is reasonably straightforward, but it is a case of size does matter. Numbers come in two varieties: integer and real. If maths is not your thing (and let's be honest, it isn't for most people), integer numbers are whole numbers in the sense that they don't include fractions, but they do include negative numbers. If you are a mathematician, yes, strictly speaking, whole numbers are only zero and the positive integers. Real numbers include the integers and pretty much everything else: rational numbers, which can be written as fractions, and irrational numbers, which cannot.

Cast your mind back to bits and bytes and the number of values they can represent, and you realize why variable size matters when it comes to numbers. Numbers are also signed or unsigned: signed types include negative numbers, while unsigned types hold only zero and positive numbers. In practical terms, this means that for the same number of bits, a signed type has a smaller maximum value, as one bit is used to indicate whether the number is positive or negative.

C#, as with several other languages, has data types for signed and unsigned numbers from one to eight bytes, or eight to 64 bits. This table lists the various integer data types. In terms of size, from smallest to largest, it goes byte, short, int (for integer), and long. With the exception of byte, which is unsigned by default (its signed variant is sbyte), each type defaults to signed, so the unsigned variant is prefixed with a U for unsigned.
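As a quick sketch of those sizes and ranges, each integral type exposes MinValue and MaxValue constants, and sizeof reports the size in bytes:

```csharp
using System;

class IntegerTypeRanges
{
    static void Main()
    {
        // sizeof on the built-in integral types is a compile-time constant,
        // and MinValue/MaxValue give each type's range.
        Console.WriteLine($"byte   {sizeof(byte)} byte(s): {byte.MinValue} to {byte.MaxValue}");
        Console.WriteLine($"sbyte  {sizeof(sbyte)} byte(s): {sbyte.MinValue} to {sbyte.MaxValue}");
        Console.WriteLine($"short  {sizeof(short)} byte(s): {short.MinValue} to {short.MaxValue}");
        Console.WriteLine($"ushort {sizeof(ushort)} byte(s): {ushort.MinValue} to {ushort.MaxValue}");
        Console.WriteLine($"int    {sizeof(int)} byte(s): {int.MinValue} to {int.MaxValue}");
        Console.WriteLine($"uint   {sizeof(uint)} byte(s): {uint.MinValue} to {uint.MaxValue}");
        Console.WriteLine($"long   {sizeof(long)} byte(s): {long.MinValue} to {long.MaxValue}");
        Console.WriteLine($"ulong  {sizeof(ulong)} byte(s): {ulong.MinValue} to {ulong.MaxValue}");
    }
}
```

Notice how each unsigned type's maximum is roughly double its signed counterpart's: the sign bit has been repurposed for magnitude.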

You might be thinking, how will I know which type to use? What if I'm doing a calculation using a short and it turns out I need a bigger data type? For the most part, int is the go-to data type, as it has you covered from roughly negative 2.1 billion to positive 2.1 billion. If you know you will be dealing with big numbers, like astronomical or national debt calculations, then use long.

Many of the smaller data types date back to the early days of computing, when memory space was at a premium. Relatively speaking, the size of numeric variables is inconsequential today. If in doubt, go big. Long has you covered from negative to positive nine quintillion in American speak, or nine trillion in the old long scale, that is, roughly 9 x 10^18.

Of course, integers don't cater to our every need, and when we need fractional numbers, we use floating-point data types. Floating-point numbers need to deal with size, like integers, and also with precision: the accuracy of representing a tiny fraction, the digits to the right of the decimal point. Float and double, the two smallest floating-point types, have a larger range than decimal but are less precise.
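A brief sketch makes the range versus precision trade-off concrete, again using each type's MaxValue constant and a simple division:

```csharp
using System;

class RangeVersusPrecision
{
    static void Main()
    {
        // Range: float and double reach astronomical magnitudes,
        // while decimal tops out far sooner.
        Console.WriteLine(float.MaxValue);   // ~3.4 x 10^38
        Console.WriteLine(double.MaxValue);  // ~1.8 x 10^308
        Console.WriteLine(decimal.MaxValue); // ~7.9 x 10^28

        // Precision: decimal keeps roughly twice as many significant digits.
        Console.WriteLine(1.0 / 3.0);   // double: ~15-17 significant digits
        Console.WriteLine(1.0m / 3.0m); // decimal: 28 significant digits
    }
}
```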

Precision, or lack of it, as we can see from the precision column, relates to possible rounding errors in calculations and number representation. At first glance, this doesn't seem intuitive, but you can think of it as two rulers or tape measures. A float or double ruler is longer than a decimal ruler, but it only has marks at, say, centimeter intervals, whereas the decimal ruler has not only millimeters but also micrometers. You can measure bigger things with the double ruler, just not as precisely as with the decimal one.

There are two issues we run into when dealing with fractional numbers. One is the decimal representation of numbers like 1/3, where outside the computer we can use three dots to indicate 0.33 recurring, or a dot or bar over the last digit. The other, possibly more pressing, problem is that computers work in binary, i.e., bits, and representing decimal fractions in bits isn't ideal.

Float and double are binary representations of fractional numbers, while decimal is, as the name says, a decimal, or base 10, representation. Why are binary fractional numbers not ideal? Well, it's the same problem as with 1/3 in base 10. Because the computer uses binary arithmetic, 0.1 is produced by dividing one by ten using binary division. I won't bore you with the details of binary long division here, but the upshot is a recurring binary number that goes on ad infinitum, so not exactly 0.1, but close.
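You can see that binary rounding in action with a classic sketch: as doubles, 0.1 plus 0.2 is not exactly 0.3, while the same sum in base 10 decimals is exact:

```csharp
using System;

class BinaryFractions
{
    static void Main()
    {
        // double stores 0.1 and 0.2 as recurring binary fractions,
        // so their sum is not exactly 0.3.
        double binarySum = 0.1 + 0.2;
        Console.WriteLine(binarySum == 0.3); // False
        Console.WriteLine(binarySum);        // 0.30000000000000004 on modern .NET

        // decimal works in base 10, so the same sum is exact.
        decimal decimalSum = 0.1m + 0.2m;
        Console.WriteLine(decimalSum == 0.3m); // True
    }
}
```

This is also why comparing floating-point values with == is generally discouraged in favor of checking that the difference falls within a small tolerance.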

Most of the time, explicitly rounding with .NET's Math.Round method will be sufficient. A base 10 decimal number is a lot more precise, but calculations take longer to perform, and, as I said, there is a loss in range. With around 15 significant digits for double and 28 for decimal, most of the time precision-related problems won't be an issue. Still, it is something to keep in mind if you perform multi-stage calculations, where one operation's products are the inputs to another. If precision is paramount over performance, use decimal.
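As a minimal illustration, Math.Round can absorb the tiny binary error from the previous sum:

```csharp
using System;

class RoundingSketch
{
    static void Main()
    {
        double sum = 0.1 + 0.2;              // not exactly 0.3 in binary
        double rounded = Math.Round(sum, 1); // round to one decimal place
        Console.WriteLine(rounded == 0.3);   // True: the error is rounded away
    }
}
```

Note that Math.Round uses banker's rounding (round half to even) by default; an overload takes a MidpointRounding argument if you need the familiar round-half-up behavior.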

In terms of assigning literal values to a floating-point variable, you append a D to a double, an F to a float, and an M to a decimal. In all cases, these letters can be either lower or upper case. You can mix integer numbers with floating-point numbers in calculations, with the result being floating-point. As you would expect, you cannot mix float or double with decimal without explicitly converting all operands to the same data type.
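Those suffixes, and the mixing rules, look like this in practice (a minimal sketch):

```csharp
using System;

class LiteralSuffixes
{
    static void Main()
    {
        double d = 1.5d; // D suffix is optional: an unsuffixed fractional literal defaults to double
        float f = 1.5f;  // F suffix is required for float
        decimal m = 1.5m; // M suffix is required for decimal

        // Mixing an integer with a floating-point number promotes to floating-point.
        double mixed = 2 + 1.5;
        Console.WriteLine(mixed); // 3.5

        // Mixing double with decimal does not compile without an explicit cast:
        // decimal bad = d + m;        // compile-time error
        decimal ok = (decimal)d + m;   // explicit conversion makes both operands decimal
        Console.WriteLine(ok);
    }
}
```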

About the Author
Avatar
Hallam Webber
Software Architect
Students
14880
Courses
27
Learning Paths
3

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, he believes good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.

Covered Topics