HLD #22

Binary file added .DS_Store
Binary file not shown.
123 changes: 123 additions & 0 deletions blogs/block_id9.md
@@ -0,0 +1,123 @@
---

# Heavy-Light Decomposition (HLD)

### Difficulty: Very Hard
**Topic**: Advanced Tree Data Structures
**Time Complexity**: \( O(\log^2 N) \) for queries
**Space Complexity**: \( O(N) \)

---


Heavy-Light Decomposition (HLD) is an advanced technique that partitions the edges of a tree into "heavy" and "light" edges, splitting the tree into chains so that any path between two nodes crosses only \( O(\log N) \) light edges. This makes it especially powerful for **range queries** (e.g., sum, min, max) and **path updates** between nodes in a tree. HLD appears mainly in competitive programming and advanced data-structure problems.

---

### Algorithm

1. **Tree Traversal**: Calculate the subtree size for each node in a Depth-First Search (DFS).
2. **Heavy Edge Selection**: For each node, select the child with the largest subtree size as the "heavy" child.
3. **Heavy-Light Paths**:
   - The heavy child continues its parent's chain; every other child starts a new chain of its own.
   - Lay out the chains so that each one occupies a contiguous segment of an array.
4. **Segment Tree**: Use a Segment Tree or Fenwick Tree to handle range queries and updates on the flattened tree array.

---

### Input and Output

- **Input**:
- A tree with nodes and edges.
  - Path queries (e.g., sum, min) between pairs of nodes, or point updates at individual nodes.
- **Output**:
  - The result of each path query, reflecting any updates applied so far.

---

### Example

#### Input

A tree structured as follows:

```
        1
       / \
      2   3
     / \   \
    4   5   6
```

- Queries: Find the sum of node values on the path between nodes 4 and 5, or update the value at node 3.

#### Output

Results for the specified queries, such as path sums, minimums, or maximums. For instance, if each node's value equals its label, the path between nodes 4 and 5 runs 4 → 2 → 5, so its sum is 4 + 2 + 5 = 11.

---

### Solution Outline

Here is an outline of the solution using HLD:

```python
class HLD:
def __init__(self, n):
self.n = n
self.tree = [[] for _ in range(n)]
self.size = [0] * n
self.parent = [-1] * n
self.depth = [0] * n
self.chain_head = [-1] * n
self.pos_in_base = [-1] * n
self.curr_pos = 0

def dfs_size(self, u):
"""Calculate subtree sizes and parents using DFS."""
self.size[u] = 1
for v in self.tree[u]:
if v != self.parent[u]:
self.parent[v] = u
self.depth[v] = self.depth[u] + 1
self.size[u] += self.dfs_size(v)
return self.size[u]

def decompose(self, u, head):
"""Decompose tree into heavy-light paths."""
self.chain_head[u] = head
self.pos_in_base[u] = self.curr_pos
self.curr_pos += 1
heavy_child, max_size = -1, 0
for v in self.tree[u]:
if v != self.parent[u] and self.size[v] > max_size:
heavy_child, max_size = v, self.size[v]
if heavy_child != -1:
self.decompose(heavy_child, head)
for v in self.tree[u]:
if v != self.parent[u] and v != heavy_child:
self.decompose(v, v)
```

## Explanation

- **Tree Size Calculation**: A Depth-First Search (DFS) is used to calculate subtree sizes.
- **Heavy Path Assignment**: Each node’s child with the largest subtree is designated as “heavy.”
- **Path Decomposition**: Nodes are grouped into heavy chains so that any path in the tree crosses at most \( O(\log N) \) of them.
- **Flattened Representation**: The tree is mapped into an array where each path segment is continuous, enabling efficient range queries with a Segment Tree.
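
To make the flattened representation concrete, here is a minimal sketch (not part of the original outline) that answers path-sum queries with the `HLD` class above. A Fenwick tree stands in for the segment tree, and the names `Fenwick` and `path_query` are illustrative:

```python
class Fenwick:
    """Point-update / range-sum structure over the flattened base array."""
    def __init__(self, n):
        self.bit = [0] * (n + 1)

    def update(self, i, delta):
        i += 1
        while i < len(self.bit):
            self.bit[i] += delta
            i += i & -i

    def prefix(self, i):
        i += 1
        s = 0
        while i > 0:
            s += self.bit[i]
            i -= i & -i
        return s

    def range_sum(self, l, r):
        return self.prefix(r) - (self.prefix(l - 1) if l > 0 else 0)


def path_query(hld, fen, u, v):
    """Sum of node values on the path u..v by walking chain heads upward."""
    res = 0
    while hld.chain_head[u] != hld.chain_head[v]:
        # Lift the node whose chain head lies deeper in the tree.
        if hld.depth[hld.chain_head[u]] < hld.depth[hld.chain_head[v]]:
            u, v = v, u
        head = hld.chain_head[u]
        res += fen.range_sum(hld.pos_in_base[head], hld.pos_in_base[u])
        u = hld.parent[head]
    # Both nodes are now on the same chain: one contiguous segment remains.
    l, r = sorted((hld.pos_in_base[u], hld.pos_in_base[v]))
    return res + fen.range_sum(l, r)


# Example usage on the tree from above (0-based labels: node i stores value i + 1)
h = HLD(6)
for a, b in [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)]:
    h.tree[a].append(b)
    h.tree[b].append(a)
h.dfs_size(0)
h.decompose(0, 0)

fen = Fenwick(6)
for node in range(6):
    fen.update(h.pos_in_base[node], node + 1)

print(path_query(h, fen, 3, 4))  # path 4 -> 2 -> 5 in 1-based labels: 4 + 2 + 5 = 11
```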


## Pros and Cons

- **Pros**:
- Efficient path-based operations in trees.
- Reduces complex tree queries to simpler range queries on an array.
- **Cons**:
- Difficult to implement and understand.
- Limited to trees and paths, less flexible for general graph types.

## Use Cases

- **Path Queries**: Ideal for answering range queries along paths in trees (e.g., sum, min, max).
- **Point Updates**: Useful for updating values at nodes within a tree and quickly reflecting them on paths.
- **Lowest Common Ancestor (LCA)**: The chain-head structure also yields an efficient LCA computation as a by-product (see the sketch below).
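
As a rough sketch (assuming the `HLD` class from the solution outline, with `dfs_size` and `decompose` already run), the LCA falls out of the chain-head structure; the helper name `lca` is illustrative:

```python
def lca(hld, u, v):
    """Lowest common ancestor via chain-head jumps."""
    # Jump the node whose chain head is deeper until both nodes share a chain.
    while hld.chain_head[u] != hld.chain_head[v]:
        if hld.depth[hld.chain_head[u]] < hld.depth[hld.chain_head[v]]:
            u, v = v, u
        u = hld.parent[hld.chain_head[u]]
    # On the same chain, the shallower node is the LCA.
    return u if hld.depth[u] < hld.depth[v] else v
```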

## Additional Challenges

1. **Dynamic Edge Updates**: Adapt HLD to work with dynamic edge weights.
2. **Path XOR Queries**: Modify the algorithm to handle XOR queries along a path.
3. **Advanced Tree Queries**: Solve complex tree-based range queries by combining HLD with other techniques.
39 changes: 39 additions & 0 deletions blogs/blog_id_7.md
@@ -0,0 +1,39 @@
# Understanding Binary Search in Python

## Introduction
Binary search is an efficient algorithm for finding an element in a sorted list. It works by repeatedly halving the search interval, which makes it much faster than linear search, especially on large datasets.

## How Binary Search Works
In binary search, there are two main components:
- **Search Interval**: The range within which the search is conducted. Initially, the interval covers the entire array.
- **Middle Element Comparison**: The middle element of the interval is compared with the target value. If they match, the search is successful. If the target value is smaller, the search continues in the left half; if it's larger, the search continues in the right half.

### Key Points
- Binary search works only on sorted arrays.
- It has a time complexity of **O(log n)**, making it highly efficient.

## Example
Here’s a simple example of a binary search function that returns the index of a target value if it exists in a sorted array, or -1 if it doesn’t:

```python
def binary_search(arr, target):
left, right = 0, len(arr) - 1 # Set initial search bounds

while left <= right:
mid = (left + right) // 2 # Find the middle index

if arr[mid] == target: # Target found
return mid
elif arr[mid] < target: # Move to the right half
left = mid + 1
else: # Move to the left half
right = mid - 1

return -1 # Target not found

# Example usage
array = [1, 3, 5, 7, 9, 11]
target = 7
result = binary_search(array, target)
print("Target found at index:", result) # Output: Target found at index: 3
```
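
As a brief aside (not in the original post), Python's standard library offers the same lower-bound search via the `bisect` module, which is handy when you don't need to hand-roll the loop:

```python
import bisect

array = [1, 3, 5, 7, 9, 11]
target = 7

i = bisect.bisect_left(array, target)   # leftmost position where target could be inserted
if i < len(array) and array[i] == target:
    print("Target found at index:", i)  # Output: Target found at index: 3
else:
    print("Target not found")
```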
120 changes: 120 additions & 0 deletions blogs/blog_id_8.md
@@ -0,0 +1,120 @@
---

# Heap Sort Algorithm

### Difficulty: Medium
**Topic**: Sorting Algorithms
**Time Complexity**: O(n log n)
**Space Complexity**: O(1)

---

Heap Sort is a comparison-based sorting technique that uses a binary heap data structure. It builds a max heap (or min heap) and repeatedly extracts the maximum (or minimum) element to produce a sorted array. It runs in O(n log n) time in every case and sorts in place, so it performs well even for large datasets.

---

### Algorithm

1. **Build the Heap**:
- Create a max heap from the input data. This step ensures that the largest element is at the root of the heap.
2. **Extract Elements**:
- Swap the root of the heap (maximum element) with the last element of the heap.
- Reduce the size of the heap by one.
- Heapify the root of the heap to maintain the max heap property.
3. **Repeat**:
- Continue the process until all elements are sorted.
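
To make the steps concrete, here is a hand-worked trace of how the array evolves for the sample input `[12, 11, 13, 5, 6, 7]` (the `|` separates the growing sorted suffix):

```python
# Build the max heap (heapify from the last internal node up):
# [12, 11, 13, 5, 6, 7]   ->  [13, 11, 12, 5, 6, 7]
#
# Extract maxima one by one (swap root with last unsorted element, then heapify):
# [13, 11, 12, 5, 6, 7]   ->  [12, 11, 7, 5, 6 | 13]
# [12, 11, 7, 5, 6 | 13]  ->  [11, 6, 7, 5 | 12, 13]
# [11, 6, 7, 5 | 12, 13]  ->  [7, 6, 5 | 11, 12, 13]
# [7, 6, 5 | 11, 12, 13]  ->  [6, 5 | 7, 11, 12, 13]
# [6, 5 | 7, 11, 12, 13]  ->  [5 | 6, 7, 11, 12, 13]
```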

---

### Input and Output

- **Input**:
- An array of integers to be sorted.
- **Output**:
- A sorted array in ascending order.

---

### Example

#### Input
```python
Array: [12, 11, 13, 5, 6, 7]
```
#### Output
```python
Sorted Array: [5, 6, 7, 11, 12, 13]
```
---

## Solution

Below is a Python solution for the Heap Sort algorithm.

```python
def heapify(arr, n, i):
largest = i # Initialize largest as root
left = 2 * i + 1 # left = 2*i + 1
right = 2 * i + 2 # right = 2*i + 2

# Check if left child exists and is greater than root
if left < n and arr[left] > arr[largest]:
largest = left

# Check if right child exists and is greater than largest so far
if right < n and arr[right] > arr[largest]:
largest = right

# Change root if needed
if largest != i:
arr[i], arr[largest] = arr[largest], arr[i] # Swap

# Heapify the root
heapify(arr, n, largest)

def heap_sort(arr):
n = len(arr)

# Build a max heap
for i in range(n // 2 - 1, -1, -1):
heapify(arr, n, i)

# One by one extract elements from heap
for i in range(n - 1, 0, -1):
arr[i], arr[0] = arr[0], arr[i] # Swap
heapify(arr, i, 0)

# Example usage
arr = [12, 11, 13, 5, 6, 7]
heap_sort(arr)
print("Sorted array is", arr)
```

---

### Pros and Cons

- **Pros**:
- Efficient for large datasets.
  - Guaranteed O(n log n) running time even in the worst case (unlike quicksort's O(n²)).

- **Cons**:
- More complex to implement compared to simpler algorithms like bubble sort.
- Not stable (does not preserve the relative order of equal elements).

---

### Use Cases

- Suitable for applications where worst-case performance guarantees matter.
- The underlying binary heap is the standard way to implement priority queues (see the `heapq` example below).
- Heaps also drive the k-way merge used in external sorting, when the data is too large to fit into memory.
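
As a side note (not part of the original post), Python's built-in `heapq` module exposes the same binary-heap machinery as a min-heap, which is the usual way to get a priority queue:

```python
import heapq

# heapq maintains a min-heap: the smallest item is always at index 0.
tasks = []
heapq.heappush(tasks, (2, "write tests"))
heapq.heappush(tasks, (1, "fix bug"))
heapq.heappush(tasks, (3, "refactor"))

while tasks:
    priority, name = heapq.heappop(tasks)  # pops the lowest priority number first
    print(priority, name)
# Output:
# 1 fix bug
# 2 write tests
# 3 refactor
```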

---

### Additional Challenges

1. **Implement Min Heap**: Modify the algorithm to sort in descending order by implementing a min heap.
2. **Stable Heap Sort**: Heap sort already handles duplicate values; extend it so that equal elements keep their original relative order (for example, by sorting (value, original index) pairs).

---