Comprehensive Discrete Mathematics Notes
1. Introduction to Discrete Mathematics
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. Unlike calculus, which deals with continuous change, discrete mathematics deals with distinct, separate values.
Key Areas of Discrete Mathematics:
- Set Theory
- Logic
- Number Theory
- Combinatorics
- Graph Theory
- Recurrence Relations
- Algorithms
Discrete mathematics forms the foundation for computer science and is essential for understanding programming, data structures, algorithm design, and cryptography.
2. Set Theory
Set theory is the mathematical theory of collections of objects called sets. It forms the foundation for most of modern mathematics.
2.1 Basic Definitions
A set is a well-defined collection of distinct objects. The objects in a set are called elements or members of the set.
Set Notation:
- We denote sets with capital letters (A, B, C, etc.)
- Elements are listed between curly braces: A = {1, 2, 3}
- x ∈ A means "x is an element of set A"
- x ∉ A means "x is not an element of set A"
- |A| denotes the cardinality (size) of set A
Example 1: Let A = {1, 2, 3, 4, 5} and B = {2, 4, 6, 8}.
- 3 ∈ A but 3 ∉ B
- |A| = 5 and |B| = 4
- The elements common to both sets are {2, 4}
2.2 Types of Sets
- Empty Set (∅): A set with no elements
- Singleton Set: A set with exactly one element
- Finite Set: A set with a finite number of elements
- Infinite Set: A set with an infinite number of elements
- Universal Set (U): Set containing all elements under consideration
2.3 Set Operations
Let A and B be sets. Then:
- Union: A ∪ B = {x | x ∈ A or x ∈ B}
- Intersection: A ∩ B = {x | x ∈ A and x ∈ B}
- Difference: A - B = {x | x ∈ A and x ∉ B}
- Complement: A' = {x ∈ U | x ∉ A}
- Symmetric Difference: A △ B = (A - B) ∪ (B - A)
Example 2: Let A = {1, 2, 3, 4, 5} and B = {4, 5, 6, 7}. Find:
- A ∪ B = {1, 2, 3, 4, 5, 6, 7}
- A ∩ B = {4, 5}
- A - B = {1, 2, 3}
- B - A = {6, 7}
- A △ B = {1, 2, 3, 6, 7}
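The same results can be checked directly with Python's built-in set type; a minimal sketch reproducing Example 2.

```python
A = {1, 2, 3, 4, 5}
B = {4, 5, 6, 7}

print(A | B)   # union: {1, 2, 3, 4, 5, 6, 7}
print(A & B)   # intersection: {4, 5}
print(A - B)   # difference A - B: {1, 2, 3}
print(B - A)   # difference B - A: {6, 7}
print(A ^ B)   # symmetric difference: {1, 2, 3, 6, 7}
```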
2.4 Set Identities
Let A, B, and C be sets. Then:
- Commutative Laws: A ∪ B = B ∪ A and A ∩ B = B ∩ A
- Associative Laws: (A ∪ B) ∪ C = A ∪ (B ∪ C) and (A ∩ B) ∩ C = A ∩ (B ∩ C)
- Distributive Laws: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) and A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
- De Morgan's Laws: (A ∪ B)' = A' ∩ B' and (A ∩ B)' = A' ∪ B'
Proof of Commutative Law for Union:
To prove A ∪ B = B ∪ A, we need to show that every element in A ∪ B is also in B ∪ A, and vice versa.
Let x be an arbitrary element:
x ∈ A ∪ B ⟺ x ∈ A or x ∈ B ⟺ x ∈ B or x ∈ A ⟺ x ∈ B ∪ A
Therefore, A ∪ B = B ∪ A.
2.5 Power Sets and Cartesian Products
- Power Set: P(A) is the set of all subsets of A
- Cartesian Product: A × B = {(a, b) | a ∈ A and b ∈ B}
Example 3: Let A = {1, 2, 3}. Find P(A).
P(A) = {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}
Note that |P(A)| = 2^|A| = 2^3 = 8
Example 4: Let A = {a, b} and B = {1, 2, 3}. Find A × B.
A × B = {(a, 1), (a, 2), (a, 3), (b, 1), (b, 2), (b, 3)}
Note that |A × B| = |A| × |B| = 2 × 3 = 6
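Both constructions can be generated with the standard itertools module; a small sketch reproducing Examples 3 and 4 (the helper name power_set is just for illustration).

```python
from itertools import chain, combinations, product

def power_set(s):
    """Return all subsets of s as a list of tuples."""
    items = list(s)
    return list(chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

A = {1, 2, 3}
print(power_set(A))        # 8 subsets, including () and the full set
print(len(power_set(A)))   # 2^|A| = 8

A2 = {'a', 'b'}
B = {1, 2, 3}
print(list(product(A2, B)))       # the 6 ordered pairs of A2 x B
print(len(list(product(A2, B))))  # |A2| * |B| = 6
```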
3. Logic and Proof Techniques
Logic is the study of principles of valid reasoning and inference. It provides the foundation for understanding mathematical proofs.
3.1 Propositional Logic
A proposition is a declarative statement that is either true or false, but not both.
Logical Operators:
- Negation (¬p): "not p"
- Conjunction (p ∧ q): "p and q"
- Disjunction (p ∨ q): "p or q"
- Implication (p → q): "if p, then q"
- Biconditional (p ↔ q): "p if and only if q"
3.2 Truth Tables
Truth tables show the logical outcome of statements based on the truth values of their components.
p | q | p ∧ q | p ∨ q | p → q | p ↔ q |
---|---|---|---|---|---|
T | T | T | T | T | T |
T | F | F | T | F | F |
F | T | F | T | T | F |
F | F | F | F | T | T |
Example 5: Create a truth table for (p → q) ∧ (¬q → ¬p)
p | q | ¬p | ¬q | p → q | ¬q → ¬p | (p → q) ∧ (¬q → ¬p) |
---|---|---|---|---|---|---|
T | T | F | F | T | T | T |
T | F | F | T | F | F | F |
F | T | T | F | T | T | T |
F | F | T | T | T | T | T |
Note that (p → q) ∧ (¬q → ¬p) is logically equivalent to p → q alone, since an implication is always equivalent to its contrapositive ¬q → ¬p. It is not a tautology: as the table shows, it is false exactly when p is true and q is false.
3.3 Logical Equivalences
Important Logical Equivalences:
- Double Negation: ¬(¬p) ≡ p
- Commutative Laws: p ∧ q ≡ q ∧ p and p ∨ q ≡ q ∨ p
- Associative Laws: (p ∧ q) ∧ r ≡ p ∧ (q ∧ r) and (p ∨ q) ∨ r ≡ p ∨ (q ∨ r)
- Distributive Laws: p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r) and p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r)
- De Morgan's Laws: ¬(p ∧ q) ≡ ¬p ∨ ¬q and ¬(p ∨ q) ≡ ¬p ∧ ¬q
- Implication: p → q ≡ ¬p ∨ q
- Contrapositive: p → q ≡ ¬q → ¬p
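These equivalences can also be verified mechanically by enumerating every truth assignment; a short brute-force check for De Morgan's law, the implication rewrite, and the contrapositive (the helper names implies and check_equiv are just for illustration).

```python
from itertools import product

def implies(p, q):
    # p -> q is equivalent to (not p) or q
    return (not p) or q

def check_equiv(f, g, n_vars=2):
    """Return True if f and g agree on every assignment of n_vars Booleans."""
    return all(f(*vals) == g(*vals) for vals in product([True, False], repeat=n_vars))

# De Morgan: not (p and q)  ==  (not p) or (not q)
print(check_equiv(lambda p, q: not (p and q), lambda p, q: (not p) or (not q)))     # True

# Implication: p -> q  ==  (not p) or q
print(check_equiv(lambda p, q: implies(p, q), lambda p, q: (not p) or q))           # True

# Contrapositive: p -> q  ==  (not q) -> (not p)
print(check_equiv(lambda p, q: implies(p, q), lambda p, q: implies(not q, not p)))  # True
```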
3.4 Predicates and Quantifiers
A predicate is a statement that contains variables and becomes a proposition when values are assigned to the variables.
Quantifiers:
- Universal Quantifier (∀x): "For all x"
- Existential Quantifier (∃x): "There exists an x"
Example 6: Let P(x) be "x > 3" where the domain is the set of real numbers.
- ∀x P(x) means "For all real numbers x, x > 3" (This is false)
- ∃x P(x) means "There exists a real number x such that x > 3" (This is true)
3.5 Proof Techniques
- Direct Proof: Assume the hypothesis and directly derive the conclusion
- Proof by Contraposition: Prove ¬q → ¬p instead of p → q
- Proof by Contradiction: Assume the statement is false and derive a contradiction
- Mathematical Induction: Prove for base case and then show that if true for k, it's true for k+1
Example 7: Prove by induction that 1 + 2 + 3 + ... + n = n(n+1)/2 for all positive integers n.
Base Case (n = 1): Left side: 1. Right side: 1(1+1)/2 = 2/2 = 1. So it's true for n = 1.
Inductive Hypothesis: Assume the formula is true for n = k. That is, 1 + 2 + ... + k = k(k+1)/2.
Inductive Step: We need to prove the formula for n = k+1.
1 + 2 + ... + k + (k+1)
= [1 + 2 + ... + k] + (k+1)
= k(k+1)/2 + (k+1) (by the inductive hypothesis)
= (k+1)[k/2 + 1]
= (k+1)[k/2 + 2/2]
= (k+1)[(k+2)/2]
= (k+1)(k+2)/2
This is exactly the formula for n = k+1, which completes the proof.
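The formula is also easy to spot-check numerically for small n; a minimal sanity check, not a substitute for the induction proof.

```python
# Verify 1 + 2 + ... + n = n(n+1)/2 for the first few positive integers
for n in range(1, 11):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("formula holds for n = 1..10")
```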
4. Number Theory
Number theory is the study of integers and their properties, particularly focusing on divisibility, prime numbers, and modular arithmetic.
4.1 Divisibility and Division Algorithm
An integer a divides another integer b (denoted a | b) if there exists an integer c such that b = ac.
Division Algorithm: For any integers a and b with b > 0, there exist unique integers q and r such that a = bq + r, where 0 ≤ r < b.
- q is called the quotient
- r is called the remainder
Example 8: Apply the division algorithm to find q and r for 47 ÷ 7.
47 = 7 × 6 + 5
So q = 6 and r = 5.
4.2 Greatest Common Divisor (GCD) and Least Common Multiple (LCM)
The GCD of two integers a and b is the largest integer that divides both a and b.
The LCM of two integers a and b is the smallest positive integer that is divisible by both a and b.
For any positive integers a and b:
lcm(a, b) × gcd(a, b) = a × b
4.3 Euclidean Algorithm
The Euclidean Algorithm is an efficient method for computing the GCD of two integers.
Example 9: Find gcd(48, 18) using the Euclidean Algorithm.
48 = 18 × 2 + 12
18 = 12 × 1 + 6
12 = 6 × 2 + 0
The last non-zero remainder is 6, so gcd(48, 18) = 6.
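The algorithm translates directly into a short loop; a minimal sketch (the standard library's math.gcd gives the same result).

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6, matching Example 9
```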
4.4 Prime Numbers and Fundamental Theorem of Arithmetic
A prime number is a positive integer greater than 1 that has no positive integer divisors other than 1 and itself.
Fundamental Theorem of Arithmetic: Every integer greater than 1 can be represented uniquely as a product of primes, up to the order of the factors.
Example 10: Find the prime factorization of 84.
84 = 2 × 42 = 2 × 2 × 21 = 2^2 × 3 × 7
4.5 Modular Arithmetic
Modular arithmetic is arithmetic for integers where values "wrap around" after reaching a certain value (the modulus).
We say a is congruent to b modulo m (written a ≡ b (mod m)) if m divides (a - b).
Example 11: Compute 17 × 13 (mod 5).
17 ≡ 2 (mod 5) (since 17 = 3 × 5 + 2)
13 ≡ 3 (mod 5) (since 13 = 2 × 5 + 3)
17 × 13 ≡ 2 × 3 ≡ 6 ≡ 1 (mod 5) (since 6 = 1 × 5 + 1)
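In Python the % operator gives the remainder, so Example 11 can be reproduced directly; the built-in pow(base, exp, mod) is useful for larger modular powers.

```python
print(17 % 5)           # 2
print(13 % 5)           # 3
print((17 * 13) % 5)    # 1, the same as (2 * 3) % 5
print(pow(7, 100, 5))   # modular exponentiation: 7^100 mod 5
```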
5. Combinatorics
Combinatorics is the study of counting, arrangement, and combination of objects.
5.1 Counting Principles
Addition Principle: If event A can occur in m ways and event B can occur in n ways, and the events cannot occur simultaneously, then event "A or B" can occur in m + n ways.
Multiplication Principle: If event A can occur in m ways and, after A has occurred, event B can occur in n ways, then the sequence "A followed by B" can occur in m × n ways.
Example 12: A restaurant offers 5 appetizers, 8 main courses, and 4 desserts. How many different 3-course meals can be created?
By the multiplication principle: 5 × 8 × 4 = 160 different meals.
5.2 Permutations
A permutation is an ordered arrangement of objects. The number of permutations of r objects chosen from n distinct objects is:
P(n, r) = n! / (n - r)!
where n! (n factorial) = n × (n-1) × (n-2) × ... × 2 × 1
Example 13: In how many ways can 5 runners finish a race (assuming no ties)?
This is P(5, 5) = 5! = 5 × 4 × 3 × 2 × 1 = 120 different ways.
5.3 Combinations
A combination is a selection of objects without regard to order. The number of ways to select r objects from a set of n distinct objects is:
C(n, r) = n! / (r! × (n - r)!)
Also written as (n choose r) or nCr.
Example 14: How many different 5-card hands can be dealt from a standard 52-card deck?
C(52, 5) = 52! / (5! × 47!) = 2,598,960 different hands.
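The standard math module exposes these counts directly; a small sketch reproducing Examples 13 and 14.

```python
import math

print(math.factorial(5))  # 5! = 120
print(math.perm(5, 5))    # P(5, 5) = 120 finishing orders
print(math.comb(52, 5))   # C(52, 5) = 2598960 five-card hands
```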
5.4 Binomial Theorem
Binomial Theorem: For any positive integer n,
(x + y)^n = Σ (k = 0 to n) C(n, k) x^(n-k) y^k
= C(n, 0)x^n + C(n, 1)x^(n-1)y + C(n, 2)x^(n-2)y^2 + ... + C(n, n)y^n
Example 15: Expand (x + 2)^4 using the Binomial Theorem.
(x + 2)^4 = C(4, 0)x^4 + C(4, 1)x^3(2) + C(4, 2)x^2(2)^2 + C(4, 3)x(2)^3 + C(4, 4)(2)^4
= 1x^4 + 4x^3(2) + 6x^2(4) + 4x(8) + 1(16)
= x^4 + 8x^3 + 24x^2 + 32x + 16
5.5 Pigeonhole Principle
Pigeonhole Principle: If n pigeons are placed into m pigeonholes and n > m, then at least one pigeonhole must contain more than one pigeon.
Example 16: Show that in a group of 13 people, at least 2 people must have been born in the same month.
We have 13 people (pigeons) and 12 months (pigeonholes). Since 13 > 12, by the pigeonhole principle, at least 2 people must share a birth month.
6. Graph Theory
Graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects.
6.1 Basic Definitions
A graph G = (V, E) consists of a set V of vertices and a set E of edges, where each edge connects a pair of vertices.
Types of Graphs:
- Simple Graph: No self-loops or multiple edges
- Directed Graph (Digraph): Edges have direction
- Weighted Graph: Edges have weights or costs
- Complete Graph: Every pair of vertices is connected
- Bipartite Graph: Vertices can be divided into two disjoint sets such that no edge connects vertices in the same set
6.2 Graph Representations
Graphs can be represented using adjacency matrices, adjacency lists, or edge lists.
Example 17: Consider the following graph with V = {A, B, C, D} and E = {(A,B), (A,C), (B,C), (B,D), (C,D)}. Its adjacency matrix is:
|   | A | B | C | D |
|---|---|---|---|---|
| A | 0 | 1 | 1 | 0 |
| B | 1 | 0 | 1 | 1 |
| C | 1 | 1 | 0 | 1 |
| D | 0 | 1 | 1 | 0 |
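The same graph can also be stored as an adjacency list (a dictionary mapping each vertex to its neighbors), which is often more convenient than a matrix for sparse graphs; a minimal sketch.

```python
# Adjacency list for the graph of Example 17
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'C', 'D'],
    'C': ['A', 'B', 'D'],
    'D': ['B', 'C'],
}

# Rebuild the adjacency matrix from the adjacency list
vertices = sorted(graph)
matrix = [[1 if v in graph[u] else 0 for v in vertices] for u in vertices]
for u, row in zip(vertices, matrix):
    print(u, row)
```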
6.3 Graph Traversals
There are two main ways to traverse a graph:
- Breadth-First Search (BFS): Explores all neighbors at the current depth before moving to nodes at the next depth level
- Depth-First Search (DFS): Explores as far as possible along each branch before backtracking
Example 18: Consider the graph from Example 17. Starting at vertex A:
BFS traversal: A, B, C, D
DFS traversal: A, B, C, D or A, B, D, C or A, C, B, D or A, C, D, B (depending on the order of visiting neighbors)
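Both traversals are short to implement over an adjacency list; a sketch using the graph from Example 17 (the neighbor order in the dictionary determines which of the listed DFS orders you get).

```python
from collections import deque

graph = {'A': ['B', 'C'], 'B': ['A', 'C', 'D'],
         'C': ['A', 'B', 'D'], 'D': ['B', 'C']}

def bfs(graph, start):
    """Visit vertices level by level using a FIFO queue."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

def dfs(graph, start, visited=None):
    """Visit vertices by following each branch as far as possible."""
    if visited is None:
        visited = []
    visited.append(start)
    for v in graph[start]:
        if v not in visited:
            dfs(graph, v, visited)
    return visited

print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D']
print(dfs(graph, 'A'))  # ['A', 'B', 'C', 'D'] with this neighbor order
```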
6.4 Euler and Hamilton Paths/Circuits
An Euler path is a path that visits every edge exactly once. If it starts and ends at the same vertex, it's an Euler circuit.
A Hamilton path is a path that visits every vertex exactly once. If it starts and ends at the same vertex, it's a Hamilton circuit.
Euler's Theorem: A connected graph has an Euler circuit if and only if every vertex has an even degree.
6.5 Trees
A tree is a connected, acyclic graph. A forest is a disjoint union of trees.
A tree with n vertices has exactly n-1 edges.
6.6 Planar Graphs
A graph is planar if it can be drawn in the plane without crossing edges.
Euler's Formula for Planar Graphs: If a connected planar graph has v vertices, e edges, and f faces, then v - e + f = 2.
7. Recurrence Relations
A recurrence relation is an equation that defines a sequence based on previous terms in the sequence.
7.1 Linear Recurrence Relations
A linear recurrence relation of order k is of the form:
a_n = c_1·a_(n-1) + c_2·a_(n-2) + ... + c_k·a_(n-k) + f(n)
where c_1, c_2, ..., c_k are constants.
7.2 Solving Linear Homogeneous Recurrence Relations
For a homogeneous linear recurrence relation (where f(n) = 0), we form the characteristic equation:
r^k - c_1·r^(k-1) - c_2·r^(k-2) - ... - c_k = 0
Example 19: Solve the recurrence relation a_n = 5a_(n-1) - 6a_(n-2) with initial conditions a_0 = 1 and a_1 = 4.
Step 1: Form the characteristic equation: r^2 - 5r + 6 = 0
Step 2: Factor: (r - 2)(r - 3) = 0, so r = 2 or r = 3
Step 3: The general solution is a_n = c_1·2^n + c_2·3^n
Step 4: Use the initial conditions to find c_1 and c_2:
a_0 = 1 = c_1·2^0 + c_2·3^0 = c_1 + c_2
a_1 = 4 = c_1·2^1 + c_2·3^1 = 2c_1 + 3c_2
Solving, we get c_1 = -1 and c_2 = 2
Step 5: Final solution: a_n = -2^n + 2 × 3^n
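The closed form can be checked against the recurrence for the first few terms; a small verification sketch (the helper names are just for illustration).

```python
def a_recurrence(n):
    """Compute a_n directly from the recurrence a_n = 5a_(n-1) - 6a_(n-2)."""
    a = [1, 4]  # a_0 = 1, a_1 = 4
    for i in range(2, n + 1):
        a.append(5 * a[i - 1] - 6 * a[i - 2])
    return a[n]

def a_closed(n):
    """Closed form from Example 19."""
    return -2**n + 2 * 3**n

for n in range(10):
    assert a_recurrence(n) == a_closed(n)
print("closed form matches the recurrence for n = 0..9")
```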
7.3 Fibonacci Numbers
The Fibonacci sequence is defined by the recurrence relation:
F_0 = 0, F_1 = 1, F_n = F_(n-1) + F_(n-2) for n ≥ 2
Example 20: Find a closed form for the Fibonacci sequence.
Step 1: The characteristic equation is r^2 - r - 1 = 0
Step 2: Using the quadratic formula, the roots are:
r_1 = (1 + √5)/2 ≈ 1.618 (the golden ratio, often denoted by φ)
r_2 = (1 - √5)/2 ≈ -0.618
Step 3: The general solution is F_n = c_1·r_1^n + c_2·r_2^n
Step 4: Using the initial conditions:
F_0 = 0 = c_1 + c_2
F_1 = 1 = c_1·r_1 + c_2·r_2
Solving, we get c_1 = 1/√5 and c_2 = -1/√5
Step 5: Final solution: F_n = (1/√5)[(1 + √5)/2]^n - (1/√5)[(1 - √5)/2]^n
This is known as Binet's formula.
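Binet's formula involves irrational numbers, but rounding the floating-point value recovers the exact integers for moderate n; a quick check against the iterative definition.

```python
from math import sqrt

def fib_binet(n):
    """Binet's formula, rounded to the nearest integer."""
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi**n - psi**n) / sqrt(5))

def fib_iter(n):
    """Iterative Fibonacci from the recurrence."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib_binet(n) == fib_iter(n) for n in range(30))
print([fib_binet(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```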
8. Algorithms
An algorithm is a step-by-step procedure for solving a problem or accomplishing a task.
8.1 Algorithm Analysis
The efficiency of an algorithm is typically measured in terms of its time complexity and space complexity, expressed using Big O notation.
Common Time Complexities:
- O(1): Constant time
- O(log n): Logarithmic time
- O(n): Linear time
- O(n log n): Linearithmic time
- O(n^2): Quadratic time
- O(2^n): Exponential time
8.2 Sorting Algorithms
Sorting algorithms arrange elements in a specific order (usually ascending or descending).
Example 21: Here's the pseudocode for the Insertion Sort algorithm:
InsertionSort(A):
    for i = 1 to A.length - 1
        key = A[i]
        j = i - 1
        while j ≥ 0 and A[j] > key
            A[j+1] = A[j]
            j = j - 1
        A[j+1] = key
Time Complexity: O(n^2) in the worst case.
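A direct Python translation of the pseudocode, assuming in-place sorting of a list.

```python
def insertion_sort(a):
    """Sort list a in place by inserting each element into the sorted prefix."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift larger elements one position right
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```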
Sorting Algorithm | Best Case | Average Case | Worst Case | Space Complexity |
---|---|---|---|---|
Bubble Sort | O(n) | O(n^2) | O(n^2) | O(1) |
Selection Sort | O(n^2) | O(n^2) | O(n^2) | O(1) |
Insertion Sort | O(n) | O(n^2) | O(n^2) | O(1) |
Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) |
Quick Sort | O(n log n) | O(n log n) | O(n^2) | O(log n) |
Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) |
8.3 Searching Algorithms
Searching algorithms find the position of a target value within a data structure.
Example 22: Binary Search algorithm (for sorted arrays):
BinarySearch(A, target):
    low = 0
    high = A.length - 1
    while low ≤ high
        mid = floor((low + high) / 2)
        if A[mid] == target
            return mid
        else if A[mid] < target
            low = mid + 1
        else
            high = mid - 1
    return -1   // Not found
Time Complexity: O(log n)
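The same idea in Python, using integer division for the midpoint (the standard bisect module offers related functionality for sorted lists).

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            low = mid + 1    # target lies in the upper half
        else:
            high = mid - 1   # target lies in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
```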
8.4 Graph Algorithms
Graph algorithms solve problems related to graph structures.
Example 23: Dijkstra's Algorithm for finding the shortest path from a source vertex to all other vertices:
Dijkstra(G, source):
    Initialize dist[source] = 0 and dist[v] = ∞ for all other vertices
    Initialize priority queue Q with all vertices
    while Q is not empty
        u = vertex in Q with minimum dist[u]
        Remove u from Q
        for each neighbor v of u
            alt = dist[u] + weight(u, v)
            if alt < dist[v]
                dist[v] = alt
                Update v's position in Q
    return dist
Time Complexity: O((|V| + |E|) log |V|) with a binary heap implementation; a Fibonacci heap improves this to O(|E| + |V| log |V|).
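A compact heap-based version in Python; this sketch assumes a weighted adjacency-list representation (a dictionary of neighbor-weight pairs) and, instead of decreasing keys in place, simply skips stale queue entries.

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (neighbor, weight) pairs."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:          # stale entry, a shorter path was already found
            continue
        for v, w in graph[u]:
            alt = d + w
            if alt < dist[v]:
                dist[v] = alt
                heapq.heappush(heap, (alt, v))
    return dist

example = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 5)],
           'C': [('D', 1)], 'D': []}
print(dijkstra(example, 'A'))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```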
8.5 Greedy Algorithms
Greedy algorithms make the locally optimal choice at each step in the hope of finding a global optimum.
Example 24: Kruskal's algorithm for finding the Minimum Spanning Tree:
Kruskal(G):
    Initialize a forest T with each vertex as a separate tree
    Sort all edges of G in non-decreasing order of weight
    for each edge (u, v) in the sorted order
        if adding (u, v) does not create a cycle in T
            Add edge (u, v) to T
    return T
Time Complexity: O(|E| log |E|) or O(|E| log |V|).
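Cycle detection is usually handled with a union-find (disjoint-set) structure; a minimal sketch, assuming the graph is given as a list of weighted edges over vertices numbered 0..n-1.

```python
def kruskal(num_vertices, edges):
    """edges: list of (weight, u, v). Returns the MST as (u, v, weight) tuples."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # edges in non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding the edge does not create a cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 1, 3), (1, 2, 3)]
print(kruskal(4, edges))  # [(0, 1, 1), (2, 3, 1), (1, 2, 2)]
```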
8.6 Dynamic Programming
Dynamic programming solves complex problems by breaking them down into simpler subproblems and storing the results to avoid redundant computations.
Example 25: The Fibonacci sequence using dynamic programming:
Fibonacci(n):
    Initialize F[0] = 0 and F[1] = 1
    for i = 2 to n
        F[i] = F[i-1] + F[i-2]
    return F[n]
Time Complexity: O(n) (compared to O(2^n) for a naive recursive approach).