We already know about the numeric and decimal data types in SQL Server 2005. Now let's look at the difference between the numeric and decimal data types, and also discuss when to use each of them, with some examples.
Difference Between Numeric and Decimal Datatype
The numeric data type is preferred when the number has a fixed precision and scale.
Syntax: decimal[ (p[ , s] )] and numeric[ (p[ , s] )]
Numeric and decimal are functionally equivalent and can be used interchangeably.
p - stands for precision
The total number of digits in the decimal or numeric value, including the digits after the decimal point. Precision ranges from 1 up to a maximum of 38. The default precision is 18.
s - stands for scale
The number of digits that can be stored to the right of the decimal point. The scale must be a value from 0 up to p. Scale can be specified only if precision is specified. The default scale is 0.
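For example (a quick sketch to illustrate the defaults), declaring a variable as plain decimal gives precision 18 and scale 0, so the fractional part is rounded away, and numeric(p, s) behaves exactly like decimal(p, s):
declare @defaultDecimal decimal        -- no arguments: defaults to decimal(18, 0)
set @defaultDecimal = 123.79
select @defaultDecimal                 -- returns 124, the scale of 0 rounds off the fraction
declare @asNumeric numeric(5,2)
declare @asDecimal decimal(5,2)
set @asNumeric = 123.45
set @asDecimal = 123.45
select @asNumeric, @asDecimal          -- both return 123.45; the two types are interchangeable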
Storage varies based on precision
1-9 -> 5 bytes
10-19 -> 9 bytes
20-28 -> 13 bytes
29-38 -> 17 bytes
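To see these storage sizes in practice, DATALENGTH returns the number of bytes used to store an expression. A small sketch with three different precisions:
declare @p9 decimal(9,2), @p19 decimal(19,4), @p38 decimal(38,10)
set @p9 = 1.25
set @p19 = 1.25
set @p38 = 1.25
select datalength(@p9)  as bytes_p9,   -- 5 bytes (precision 1-9)
       datalength(@p19) as bytes_p19,  -- 9 bytes (precision 10-19)
       datalength(@p38) as bytes_p38   -- 17 bytes (precision 29-38)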
Example: declare @decimal decimal(2,1)
set @decimal = 1.2
select @decimal
Here the precision is 2, which means the value can hold a total of only 2 digits, and the scale is 1, so one of those digits can appear after the decimal point.
If you increase the scale to 2 (decimal(2,2)) and then assign the same value 1.2, it will throw the error below:
Msg 8115, Level 16, State 8, Line 4
Arithmetic overflow error converting numeric to data type numeric.
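The overflow happens because with decimal(2,2) both digits are reserved for the fractional part, so there is no room for the 1 before the decimal point. A value that needs no digit before the decimal point still fits, as this short sketch shows:
declare @decimal2 decimal(2,2)
set @decimal2 = 0.99    -- succeeds: no digit is needed before the decimal point
select @decimal2
set @decimal2 = 1.2     -- fails with Msg 8115, arithmetic overflow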
Bit
An integer data type that can take a value of 1, 0, or NULL.
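A quick sketch of using bit:
declare @isActive bit
set @isActive = 1         -- valid stored values are 0, 1, or NULL
select @isActive
set @isActive = 25        -- any non-zero number is converted to 1
select @isActive          -- returns 1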