🔢 Decimal to Binary Converter
Professional Decimal to Binary Calculator | Base-10 to Base-2 Conversion Tool
💻 Common Decimal to Binary Conversions
📚 Complete Guide to Decimal to Binary Conversion
Understanding Number Systems
**Decimal (Base-10) Number System:** The decimal system is humanity's primary counting method, using ten distinct digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). Each position in a decimal number represents a power of 10, reading from right to left: ones place \( (10^0 = 1) \), tens place \( (10^1 = 10) \), hundreds place \( (10^2 = 100) \), thousands place \( (10^3 = 1000) \), and so on. Example: the number 345 in decimal means \( 3 \times 10^2 + 4 \times 10^1 + 5 \times 10^0 = 3 \times 100 + 4 \times 10 + 5 \times 1 = 300 + 40 + 5 = 345 \). This positional notation system emerged naturally from human anatomy—we have ten fingers, making base-10 intuitive for counting and arithmetic.

**Binary (Base-2) Number System:** Binary uses only two digits (0 and 1), called "bits" (binary digits). Each position represents a power of 2, reading right to left: \( 2^0 = 1 \), \( 2^1 = 2 \), \( 2^2 = 4 \), \( 2^3 = 8 \), \( 2^4 = 16 \), \( 2^5 = 32 \), \( 2^6 = 64 \), \( 2^7 = 128 \), \( 2^8 = 256 \), and so on. Example: the binary number 1011 means \( 1 \times 2^3 + 0 \times 2^2 + 1 \times 2^1 + 1 \times 2^0 = 1 \times 8 + 0 \times 4 + 1 \times 2 + 1 \times 1 = 8 + 0 + 2 + 1 = 11 \) in decimal. Binary perfectly matches electronic circuits' two states: off (0) and on (1), voltage low and high, or magnetic polarization north and south. This correspondence makes binary the fundamental language of all digital computers, microprocessors, memory systems, and digital communication.

**Why computers use binary:** Electronic circuits naturally implement binary logic through transistors (switching devices with two stable states: conducting and non-conducting). A typical CPU contains billions of transistors, each representing one bit. Binary mathematics enables Boolean algebra (AND, OR, NOT, XOR gates), forming the foundation of all digital logic circuits, arithmetic units, memory addressing, and data storage.
Storage media (hard drives, SSDs, optical discs) encode binary through magnetic orientation, electrical charge, or reflective pits. Network transmission sends binary as voltage pulses (Ethernet), light pulses (fiber optics), or radio waves (WiFi modulation). Every image, video, document, program, and operating system ultimately reduces to binary sequences.

**Notation conventions:** A subscript indicates the base: \( 13_{10} \) means "13 in decimal"; \( 1101_2 \) means "1101 in binary." Without a subscript, decimal is assumed by default. In programming, binary is sometimes prefixed with "0b": 0b1010 = 10 decimal. Hexadecimal (base-16) notation uses the prefix "0x": 0xFF = 255 decimal = 11111111 binary.
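In a language such as Python, these notation conventions map directly onto integer literals and built-in functions:

```python
# Binary (0b) and hexadecimal (0x) literals are ordinary integers in Python.
assert 0b1010 == 10          # binary literal -> decimal 10
assert 0xFF == 255           # hex literal -> decimal 255

# bin() and hex() convert back to prefixed strings.
assert bin(13) == "0b1101"   # decimal 13 -> "0b1101"
assert hex(255) == "0xff"    # decimal 255 -> "0xff"

# int() with an explicit base parses a digit string in that base.
assert int("1101", 2) == 13  # binary string -> decimal 13
```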
Division-by-2 Conversion Method
**Standard Algorithm: Repeated Division by 2.** This systematic method converts any positive decimal integer to binary by successively dividing by 2 and recording the remainders. The remainders, read in reverse order (bottom to top), form the binary representation.

Step-by-step procedure:

1. Divide the decimal number by 2.
2. Record the remainder (either 0 or 1)—this becomes one binary digit.
3. Replace the decimal number with the quotient (the result of the division).
4. Repeat steps 1-3 until the quotient equals 0.
5. Read all remainders from bottom to top (last to first)—this sequence is the binary equivalent.

**Detailed Example 1: Convert 13 to binary.** Division 1: \( 13 \div 2 = 6 \) remainder \( 1 \) (13 is odd, so remainder 1). Division 2: \( 6 \div 2 = 3 \) remainder \( 0 \) (6 is even, so remainder 0). Division 3: \( 3 \div 2 = 1 \) remainder \( 1 \) (3 is odd, remainder 1). Division 4: \( 1 \div 2 = 0 \) remainder \( 1 \) (1 is odd, remainder 1; quotient 0 means stop). Reading the remainders bottom-to-top: 1, 1, 0, 1 → Binary: \( 1101_2 \). Verification: \( 1 \times 2^3 + 1 \times 2^2 + 0 \times 2^1 + 1 \times 2^0 = 8 + 4 + 0 + 1 = 13_{10} \) ✓.

**Detailed Example 2: Convert 25 to binary.** Step 1: \( 25 \div 2 = 12 \) remainder \( 1 \). Step 2: \( 12 \div 2 = 6 \) remainder \( 0 \). Step 3: \( 6 \div 2 = 3 \) remainder \( 0 \). Step 4: \( 3 \div 2 = 1 \) remainder \( 1 \). Step 5: \( 1 \div 2 = 0 \) remainder \( 1 \). Bottom-to-top: 1, 1, 0, 0, 1 → Binary: \( 11001_2 \). Verify: \( 1 \times 16 + 1 \times 8 + 0 \times 4 + 0 \times 2 + 1 \times 1 = 16 + 8 + 1 = 25_{10} \) ✓.

**Detailed Example 3: Convert 255 to binary.** \( 255 \div 2 = 127 \) r \( 1 \); \( 127 \div 2 = 63 \) r \( 1 \); \( 63 \div 2 = 31 \) r \( 1 \); \( 31 \div 2 = 15 \) r \( 1 \); \( 15 \div 2 = 7 \) r \( 1 \); \( 7 \div 2 = 3 \) r \( 1 \); \( 3 \div 2 = 1 \) r \( 1 \); \( 1 \div 2 = 0 \) r \( 1 \). Eight remainders, all 1s: \( 11111111_2 \) (eight bits). Verify: \( 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255_{10} \) ✓.
Note: 255 is the maximum value of an 8-bit unsigned integer (\( 2^8 - 1 = 256 - 1 = 255 \)).

**Why this method works mathematically:** Any decimal number \( N \) can be expressed as \( N = q \times 2 + r \), where \( q \) is the quotient and \( r \) is the remainder (0 or 1). The remainder \( r \) is the least significant bit (the rightmost bit, \( 2^0 \) position). Repeatedly dividing the quotient by 2 extracts successive bits from right to left, building the binary representation one bit at a time. The algorithm essentially decomposes the decimal number into its binary place values through successive even/odd tests: the remainder indicates odd (1) or even (0) at each division stage.
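The repeated-division procedure can be sketched in a few lines of Python. This is an illustrative implementation of the algorithm described above, not the converter's actual code:

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative integer to a binary string by
    repeated division by 2, reading remainders bottom-to-top."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # remainder (0 or 1) is the next bit
        n //= 2                        # replace the number with the quotient
    # Remainders were collected least-significant-bit first, so reverse them.
    return "".join(reversed(remainders))

print(decimal_to_binary(13))   # 1101
print(decimal_to_binary(25))   # 11001
print(decimal_to_binary(255))  # 11111111
```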
Binary to Decimal Conversion
**Positional Notation Method: Multiply and Add.** Converting binary to decimal reverses the process: multiply each binary digit by its positional power of 2, then sum all the products. Formula: for the binary number \( b_n b_{n-1} \ldots b_2 b_1 b_0 \), decimal value = \( b_n \times 2^n + b_{n-1} \times 2^{n-1} + \ldots + b_2 \times 2^2 + b_1 \times 2^1 + b_0 \times 2^0 \).

**Example 1: Convert 1010 binary to decimal.** Positions (right to left): bit 0 (rightmost) through bit 3 (leftmost). Binary: 1 0 1 0. Position powers: \( 2^3=8, 2^2=4, 2^1=2, 2^0=1 \). Calculation: \( 1 \times 8 + 0 \times 4 + 1 \times 2 + 0 \times 1 = 8 + 0 + 2 + 0 = 10_{10} \). Result: \( 1010_2 = 10_{10} \).

**Example 2: Convert 11001 binary to decimal.** Binary: 1 1 0 0 1 (5 bits). Powers: \( 2^4=16, 2^3=8, 2^2=4, 2^1=2, 2^0=1 \). Calculation: \( 1 \times 16 + 1 \times 8 + 0 \times 4 + 0 \times 2 + 1 \times 1 = 16 + 8 + 0 + 0 + 1 = 25_{10} \).

**Example 3: Convert 11111111 to decimal.** Eight 1-bits: \( 2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0 = 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255_{10} \). Quick pattern: \( n \) consecutive 1-bits equal \( 2^n - 1 \) (one less than the next power of 2). Eight 1s = \( 2^8 - 1 = 255 \); four 1s = \( 2^4 - 1 = 15 \); ten 1s = \( 2^{10} - 1 = 1023 \).

**Shortcut for longer binary numbers:** Identify which bits are 1 (ignore the 0 bits) and sum only those powers of 2. Example: 10100110 has 1s in positions 1, 2, 5, and 7 (counting from 0, right to left). Sum: \( 2^7 + 2^5 + 2^2 + 2^1 = 128 + 32 + 4 + 2 = 166_{10} \).
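The multiply-and-add method likewise maps directly to a short Python function (again an illustrative sketch, not the site's implementation). It uses the shortcut above: only 1-bits contribute to the sum.

```python
def binary_to_decimal(bits: str) -> int:
    """Sum the power of 2 for each 1-bit; positions count from the right."""
    total = 0
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position  # 0-bits contribute nothing
    return total

print(binary_to_decimal("1010"))      # 10
print(binary_to_decimal("11001"))     # 25
print(binary_to_decimal("10100110"))  # 166
```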
Common Decimal to Binary Conversion Table
| Decimal | Binary | Calculation (Powers of 2) | Significance |
|---|---|---|---|
| 0 | 0 | 0 | Zero (minimum value) |
| 1 | 1 | \( 2^0 = 1 \) | One (single bit set) |
| 2 | 10 | \( 2^1 = 2 \) | First power of 2 |
| 3 | 11 | \( 2^1 + 2^0 = 2+1 \) | Two consecutive bits |
| 4 | 100 | \( 2^2 = 4 \) | Second power of 2 |
| 7 | 111 | \( 2^2 + 2^1 + 2^0 = 4+2+1 \) | Three bits set (2³-1) |
| 8 | 1000 | \( 2^3 = 8 \) | Third power of 2 (1 byte = 8 bits) |
| 10 | 1010 | \( 2^3 + 2^1 = 8+2 \) | Common decimal (denary) |
| 15 | 1111 | \( 2^3 + 2^2 + 2^1 + 2^0 = 8+4+2+1 \) | 4-bit max (2⁴-1, one nibble) |
| 16 | 10000 | \( 2^4 = 16 \) | Fourth power of 2 |
| 31 | 11111 | \( 2^4 + 2^3 + 2^2 + 2^1 + 2^0 \) | 5-bit max (2⁵-1) |
| 32 | 100000 | \( 2^5 = 32 \) | Fifth power of 2 |
| 63 | 111111 | Six 1-bits | 6-bit max (2⁶-1) |
| 64 | 1000000 | \( 2^6 = 64 \) | Sixth power (common computer value) |
| 127 | 1111111 | Seven 1-bits | 7-bit max (ASCII character max) |
| 128 | 10000000 | \( 2^7 = 128 \) | Seventh power (signed byte max+1) |
| 255 | 11111111 | Eight 1-bits | 8-bit max (1 byte max, RGB color max) |
| 256 | 100000000 | \( 2^8 = 256 \) | 1 byte = 256 values (0-255) |
| 1024 | 10000000000 | \( 2^{10} = 1024 \) | 1 KB (kilobyte, approximately 1000) |
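The table's decimal-binary pairs can be spot-checked with Python's built-in `bin()`, which prepends the "0b" prefix discussed earlier:

```python
# Spot-check a few rows of the conversion table above.
table = [(7, "111"), (15, "1111"), (255, "11111111"), (1024, "10000000000")]
for decimal, binary in table:
    assert bin(decimal) == "0b" + binary

# The "n consecutive 1-bits = 2^n - 1" pattern from the table's max rows.
assert 2**4 - 1 == 15    # 4-bit max (one nibble)
assert 2**8 - 1 == 255   # 8-bit max (one byte)
print("all table entries check out")
```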
Practical Applications and Computer Science
**Computer Memory and Data Storage:** All digital storage uses binary at the fundamental level. Bit (binary digit): the smallest unit, storing a single 0 or 1. Byte: 8 bits, representing 256 possible values (\( 2^8 = 256 \); range 0-255 unsigned, or -128 to +127 signed). Example: the letter 'A' in ASCII = 65 decimal = 01000001 binary (one byte). Kilobyte (KB): \( 2^{10} = 1,024 \) bytes (not exactly 1,000, due to the binary base). Megabyte (MB): \( 2^{20} = 1,048,576 \) bytes. Gigabyte (GB): \( 2^{30} = 1,073,741,824 \) bytes. Terabyte (TB): \( 2^{40} \) bytes. Understanding binary explains why storage capacities use these specific powers of 2 rather than clean decimal thousands.

**IP Addresses:** IPv4 addresses use four bytes (32 bits total), each byte 0-255 decimal. Example: 192.168.1.1 converts to binary as 192 = 11000000, 168 = 10101000, 1 = 00000001, 1 = 00000001; full binary: 11000000.10101000.00000001.00000001. 32 bits allow \( 2^{32} = 4,294,967,296 \) possible addresses. Subnet masks use binary to divide networks: 255.255.255.0 = 11111111.11111111.11111111.00000000.

**Permissions and Flags:** Unix/Linux file permissions use 3-bit groups (read = 4, write = 2, execute = 1 in decimal; 100, 010, 001 in binary). Permission 755 = 111 101 101 in binary (owner: rwx; group: r-x; others: r-x). Boolean flags combine using bitwise operations: Flag1 = 0001, Flag2 = 0010, Flag3 = 0100, Flag4 = 1000; the combination 0111 indicates flags 1, 2, and 3 enabled.

**Color Representation:** The RGB color model uses three bytes (24 bits): 8 bits red (0-255), 8 bits green (0-255), 8 bits blue (0-255). White = 255,255,255 = 11111111,11111111,11111111 (all channels at maximum). Black = 0,0,0 = 00000000,00000000,00000000 (no color). Pure red = 255,0,0 = 11111111,00000000,00000000. Hexadecimal shorthand: #FFFFFF = white (FF = 255 in hex); #FF0000 = red.

**Binary Arithmetic in CPUs:** Addition: 1+0=1, 1+1=10 (0 with carry 1), 10+1=11, 11+1=100 (carries propagate). Example: 1011 + 0110 = 10001 (11 + 6 = 17 in decimal).
Subtraction uses two's complement. Multiplication by 2 is a left shift by one position (append a 0): 101 (5) << 1 = 1010 (10). Division by 2 is a right shift by one position: 1010 (10) >> 1 = 101 (5).

**Programming and Data Types:** Integer ranges are determined by bit count. 8-bit unsigned: 0 to 255 (\( 2^8-1 \)). 8-bit signed: -128 to +127 (one bit reserved for the sign). 16-bit unsigned: 0 to 65,535 (\( 2^{16}-1 \)). 32-bit unsigned: 0 to 4,294,967,295 (\( 2^{32}-1 \); an unsigned int in C). 64-bit unsigned: 0 to 18,446,744,073,709,551,615 (\( 2^{64}-1 \); an unsigned long long in C). Overflow occurs when a result exceeds the maximum: 255 + 1 wraps around to 0 in 8-bit arithmetic.

**Network Protocols:** Packet headers use binary encoding. TCP flags (8 bits): CWR, ECE, URG, ACK, PSH, RST, SYN, FIN; a SYN packet has bit 1 set (00000010). Checksum validation relies on bitwise arithmetic. MAC addresses are 48 bits (six bytes), e.g. 00:1A:2B:3C:4D:5E.
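The shift and overflow behavior described above can be demonstrated in Python. One caveat: Python integers are arbitrary precision and never overflow, so the 8-bit wraparound is emulated here with a bit mask.

```python
# Shifting left/right multiplies/divides by 2.
assert 0b101 << 1 == 0b1010        # 5 * 2 = 10
assert 0b1010 >> 1 == 0b101        # 10 // 2 = 5

# Binary addition with carry propagation.
assert 0b1011 + 0b0110 == 0b10001  # 11 + 6 = 17

# Python ints don't overflow, so emulate 8-bit wraparound with a mask:
assert (255 + 1) & 0xFF == 0       # 255 + 1 wraps to 0 in 8-bit arithmetic
```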
Why Choose RevisionTown's Decimal to Binary Converter?
RevisionTown's professional converter provides:

1. **Bidirectional Conversion:** instantly convert decimal↔binary with accurate algorithms.
2. **Step-by-Step Division Method:** shows the complete division-by-2 process for educational understanding.
3. **Bulk Processing:** convert multiple values simultaneously for programming, networking, and data analysis.
4. **Large Number Support:** handles decimal values up to 9,999,999 (24-bit binary) for practical applications.
5. **Formula Display:** demonstrates the binary-to-decimal calculation with a full power-of-2 expansion.
6. **Comprehensive Reference:** quick lookup table of common values 0-255 (the 8-bit range).
7. **Mobile Optimized:** responsive design works on smartphones, tablets, and desktops.
8. **Zero Cost:** completely free with no ads, registration, or usage limits.
9. **Professional Accuracy:** trusted by computer science students, programmers, network engineers, digital electronics students, and IT professionals worldwide.

Typical uses include homework (converting decimal 13 to binary 1101 with the division steps shown), programming and network configuration (converting IP octets such as 192.168.1.1 to binary, or the subnet mask 255.255.255.0 = 11111111.11111111.11111111.00000000 for CIDR notation /24), digital circuit design (binary control values for logic gates, multiplexers, and memory addressing), computer architecture (8-bit, 16-bit, 32-bit, and 64-bit integer representations), data encoding (RGB color 255,128,64 = 11111111,10000000,01000000 for image processing), Linux permissions (chmod 755 = 111 101 101), embedded systems (sensor values, ADC readings, and register values), and education (number systems, positional notation, and the base-conversion algorithms fundamental to computer science).
❓ Frequently Asked Questions
**What is 10 in binary?**
10 (decimal) = 1010 (binary). Division method: Step 1: 10 ÷ 2 = 5 remainder 0. Step 2: 5 ÷ 2 = 2 remainder 1. Step 3: 2 ÷ 2 = 1 remainder 0. Step 4: 1 ÷ 2 = 0 remainder 1. Read the remainders bottom-to-top: 1, 0, 1, 0 → Binary: 1010. Verification: \( 1 \times 2^3 + 0 \times 2^2 + 1 \times 2^1 + 0 \times 2^0 = 8 + 0 + 2 + 0 = 10 \) ✓. Meaning: positions 3 (value 8) and 1 (value 2) are set, summing to 10. Four bits represent decimal 0-15. Memory aid: the alternating pattern 1010 represents decimal 10, a common value in computing.
**How do you convert decimal to binary?**
Division-by-2 method: (1) Divide the decimal number by 2. (2) Write the remainder (0 or 1). (3) Replace the number with the quotient. (4) Repeat until the quotient = 0. (5) Read the remainders bottom-to-top to get the binary result. Example: convert 25 to binary. 25÷2=12 r1; 12÷2=6 r0; 6÷2=3 r0; 3÷2=1 r1; 1÷2=0 r1. Remainders bottom-to-top: 1,1,0,0,1 → 11001. Verify: 16+8+1=25 ✓. Alternative method (powers of 2): find the largest power of 2 ≤ the number, subtract, and repeat. For 25: the largest is 16 (2⁴), leaving 9; then 8 (2³), leaving 1; then 1 (2⁰). Positions 4, 3, and 0 are set → 11001. Both methods produce the same result; the division method is more systematic for larger numbers.
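The alternative powers-of-2 method can be sketched in Python as well (the function name is illustrative):

```python
def decimal_to_binary_subtraction(n: int) -> str:
    """Build the binary string left to right: subtract the largest
    power of 2 that fits, writing a 1 at that position, else a 0."""
    if n == 0:
        return "0"
    highest = n.bit_length() - 1       # exponent of the largest power of 2 <= n
    bits = []
    for exponent in range(highest, -1, -1):
        power = 2 ** exponent
        if n >= power:
            bits.append("1")
            n -= power                 # subtract and continue with the remainder
        else:
            bits.append("0")
    return "".join(bits)

print(decimal_to_binary_subtraction(25))  # 11001
```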
**What is the binary number system?**
Binary is the base-2 number system, using only two digits: 0 and 1. Each position represents a power of 2 (reading right to left): \( 2^0=1, 2^1=2, 2^2=4, 2^3=8, 2^4=16, 2^5=32, 2^6=64, 2^7=128, 2^8=256 \), etc. Example: 1011 (binary) = \( 1×8 + 0×4 + 1×2 + 1×1 = 8+2+1 = 11 \) (decimal). Why computers use binary: electronic circuits have two states (on/off, high/low voltage, charged/uncharged), and binary perfectly matches this two-state logic. Transistors (billions in a CPU) switch between two states representing 0 and 1. All digital data—numbers, text, images, videos, programs—is ultimately stored as binary sequences (bits). Terminology: bit = binary digit (0 or 1); byte = 8 bits (can represent 0-255 decimal); nibble = 4 bits (half a byte). Binary is used throughout computing: memory addresses, CPU instructions, network protocols, file encoding, and data transmission.
**How do you convert binary to decimal?**
Multiply each binary digit by its positional power of 2, then sum. Formula: for binary \( b_3 b_2 b_1 b_0 \), decimal = \( b_3×2^3 + b_2×2^2 + b_1×2^1 + b_0×2^0 \). Example: convert 1101 binary to decimal. Positions (right to left): 0, 1, 2, 3. Calculation: \( 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 8+4+0+1 = 13 \). Result: 1101₂ = 13₁₀. Shortcut: only calculate the positions with 1-bits (ignore 0-bits). Example: 10101 has 1s at positions 0, 2, and 4. Sum: \( 2^4+2^2+2^0 = 16+4+1 = 21 \). Quick reference: memorize the powers of 2 up to \( 2^{10}=1024 \) for faster mental conversion. Practice: 1111 = 8+4+2+1 = 15; 10000 = 16; 11111111 = 255 (eight 1s = \( 2^8-1 \)).
**What is 255 in binary?**
255 (decimal) = 11111111 (binary): eight consecutive 1-bits. Calculation: 255÷2=127 r1; 127÷2=63 r1; 63÷2=31 r1; 31÷2=15 r1; 15÷2=7 r1; 7÷2=3 r1; 3÷2=1 r1; 1÷2=0 r1. Eight remainders, all 1s: 11111111. Verification: \( 2^7+2^6+2^5+2^4+2^3+2^2+2^1+2^0 = 128+64+32+16+8+4+2+1 = 255 \). Significance: 255 is the maximum value of an 8-bit unsigned integer (one byte). Formula: \( n \) bits have a maximum of \( 2^n-1 \); for 8 bits, \( 2^8-1 = 256-1 = 255 \). Range: 8-bit = 0 to 255 (256 total values). Applications: RGB color maximum (255,255,255 = white); IP address octets (each 0-255, e.g. 192.168.1.255 broadcast); extended ASCII character set (256 characters); byte value limit in programming. Pattern: all 1-bits gives the maximum for that bit width (4-bit max = 15 = 1111; 16-bit max = 65535 = 1111111111111111).
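In Python, the \( 2^n - 1 \) pattern is easy to verify with bit shifts (`1 << n` computes \( 2^n \)):

```python
# n bits of all 1s equal 2**n - 1.
assert (1 << 8) - 1 == 255       # 8-bit unsigned maximum (one byte)
assert (1 << 4) - 1 == 15        # 4-bit (nibble) maximum
assert (1 << 16) - 1 == 65535    # 16-bit unsigned maximum

# Equivalently, parse eight literal 1-bits as a base-2 string.
assert int("1" * 8, 2) == 255
```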
**How many bits are in a byte?**
1 byte = 8 bits (the fundamental unit in computing). Bit: a single binary digit (0 or 1), the smallest unit of data. Byte: 8 bits grouped together, representing 256 different values (\( 2^8 = 256 \)); range 0 to 255 (unsigned) or -128 to +127 (signed). Why 8 bits? Historically, early computers standardized on 8-bit bytes for character encoding (ASCII uses 7 bits; extended ASCII uses 8 bits = 256 characters, including letters, numbers, and symbols). Modern standards: ISO/IEC 80000-13 defines the byte as 8 bits ("octet" in networking terminology). Related units: nibble = 4 bits (half a byte, representing 0-15 = one hexadecimal digit); word = typically 16, 32, or 64 bits depending on CPU architecture (2, 4, or 8 bytes). Examples: the character 'A' = 65 decimal = 01000001 binary = 1 byte; an IP address octet = 1 byte (192 = 11000000); an RGB color channel = 1 byte per color (3 bytes for 24-bit color). File size: 1 KB = 1,024 bytes = 8,192 bits.
**Why do computers use binary instead of decimal?**
Binary perfectly matches the two-state nature of electronic circuits: (1) Physical implementation: electronic components (transistors, logic gates) naturally have two stable states: on/off, high/low voltage (typically 0 V = 0; 3.3 V or 5 V = 1), conducting/non-conducting, charged/uncharged capacitor. Building reliable three-state or ten-state circuits is far more complex and error-prone due to voltage noise, temperature variation, and manufacturing tolerances. (2) Reliability: two-state systems are highly robust against electrical noise (a voltage fluctuation won't flip a bit unless it crosses the threshold between 0 and 1); a ten-state decimal system would require distinguishing ten precise voltage levels, which is extremely sensitive to interference and would require expensive shielding and precise components. (3) Boolean algebra: binary enables an elegant mathematical framework (AND, OR, NOT, XOR, NAND, NOR gates) that forms the basis of all digital circuits, arithmetic logic units (ALUs), CPU design, and programming logic. (4) Storage media: magnetic storage (hard drives) uses north/south pole orientation; optical discs (CD/DVD) use reflective pits; flash memory (SSD) uses trapped electrical charge; all encode binary. Emerging technologies (quantum computing) use different principles but still reduce to binary at the classical interface. (5) History: early electronic computers of the 1940s experimented with decimal representations using vacuum tubes, but binary proved far more reliable, faster, and cheaper. Modern CPUs contain billions of transistors, which is only feasible with simple two-state logic. Human interface: we use decimal (base-10) for input and output; computers convert to binary internally for processing and back to decimal for display. Programmers use hexadecimal (base-16) as a convenient shorthand for binary (each hex digit = 4 binary bits: F₁₆ = 1111₂).
**What is 16 in binary?**
16 (decimal) = 10000 (binary), the fourth power of 2. Conversion: 16÷2=8 r0; 8÷2=4 r0; 4÷2=2 r0; 2÷2=1 r0; 1÷2=0 r1. Remainders bottom-to-top: 1,0,0,0,0 → 10000. Verification: \( 1×2^4 + 0×2^3 + 0×2^2 + 0×2^1 + 0×2^0 = 16+0+0+0+0 = 16 \) ✓. Significance: 16 = \( 2^4 \) (an exact power of 2). Pattern: powers of 2 in binary are always a single 1 followed by zeros: 2=10, 4=100, 8=1000, 16=10000, 32=100000, 64=1000000, 128=10000000, 256=100000000; the number of zeros equals the exponent. Computing relevance: 16-bit integer range (0-65535 unsigned; -32768 to +32767 signed). Hexadecimal (base-16) uses 16 symbols (0-9, A-F); one hex digit = 4 binary bits. 16 KB of memory = 16,384 bytes (\( 16×2^{10}=16×1024 \)). A nibble (4 bits) has 16 possible values (0-15 decimal = 0-F hex = 0000-1111 binary). Port numbers, character encoding, and memory alignment often involve powers of 16.