Gyaan

Python

All 41 notes on one page

Fundamentals

1

Variables and Data Types

beginner variables data-types dynamic-typing

In Python, a variable is just a name that points to an object in memory. We don’t need to declare types — Python figures it out at runtime. This is called dynamic typing.

Creating Variables

There’s no let, const, or var keyword. We just assign a value and Python handles the rest.

name = "Manish"       # str
age = 25              # int
height = 5.9          # float
is_active = True      # bool
nothing = None        # NoneType

The same variable can point to different types at different times. Python won’t complain.

x = 10          # x is an int
x = "hello"     # now x is a str — totally fine

Built-in Data Types

Python comes with these core types out of the box:

  • int — whole numbers like 42, -7, 1_000_000 (underscores for readability)
  • float — decimal numbers like 3.14, -0.5
  • complex — numbers with a real and imaginary part like 3 + 4j (rarely used outside math)
  • bool — True or False (capitalized, and bool is actually a subclass of int)
  • str — text like "hello" or 'world'
  • None — Python’s version of “nothing” or “no value”

Checking Types

We can use type() to see what type something is, and isinstance() to check if it belongs to a type.

x = 42
print(type(x))              # <class 'int'>
print(isinstance(x, int))   # True

# isinstance can check multiple types at once
print(isinstance(x, (int, float)))  # True

The difference? type() gives us the exact type, while isinstance() also respects inheritance. Since bool is a subclass of int, isinstance(True, int) returns True.
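
That bool/int relationship is easy to verify — a quick check, nothing new, just demonstrating the claim above:

```python
# bool is a subclass of int, so isinstance() respects it but type() does not
x = True
print(type(x) is bool)        # True — the exact type is bool
print(type(x) is int)         # False — type() ignores inheritance
print(isinstance(x, int))     # True — isinstance() follows the subclass chain
print(issubclass(bool, int))  # True
```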

Naming Conventions

Python has strong conventions (from PEP 8):

  • Variables and functions — snake_case (e.g., user_name, total_count)
  • Constants — UPPER_SNAKE_CASE (e.g., MAX_RETRIES, API_KEY)
  • Classes — PascalCase (e.g., UserProfile)
  • Names starting with _ are “private by convention” (not enforced)
  • Names starting with __ trigger name mangling in classes
# Good
user_age = 25
MAX_CONNECTIONS = 100

# Avoid
userAge = 25       # camelCase is not Pythonic
x = 25             # too vague
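
The last two bullets deserve a quick demo — a sketch of convention-private vs name-mangled attributes (the Account class here is made up for illustration):

```python
class Account:
    def __init__(self):
        self._hint = "private by convention"  # still freely accessible
        self.__token = "mangled"              # stored as _Account__token

acct = Account()
print(acct._hint)            # works — the underscore is just a convention
print(acct._Account__token)  # works — this is the mangled name
# print(acct.__token)        # would raise AttributeError outside the class
```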

In simple language, Python variables are just labels we stick on objects. The object knows its own type — the variable doesn’t care.


2

Mutable vs Immutable Types

beginner mutable immutable references

Every object in Python is either mutable (can be changed in place) or immutable (cannot be changed — any “modification” creates a new object). This distinction affects how variables behave, especially when we pass them to functions.

Which Types Are Which?

  • Immutable — int, float, str, tuple, frozenset, bool, None
  • Mutable — list, dict, set, custom objects (by default)
Immutable (new object created):
x = "hello"          # x -> "hello"          id: 100
x = x + " world"     # x -> "hello world"    id: 200 (NEW)

Mutable (changed in place):
nums = [1, 2, 3]     # nums -> [1, 2, 3]     id: 300
nums.append(4)       # nums -> [1, 2, 3, 4]  id: 300 (SAME)

Seeing It in Action with id()

The id() function returns an object's unique identity — in CPython, that's its memory address. We can use it to prove whether an object changed in place or a new one was created.

# Immutable — new object every time
name = "hello"
print(id(name))    # e.g., 140234567890
name += " world"
print(id(name))    # different id — new object was created

# Mutable — same object modified
fruits = ["apple", "banana"]
print(id(fruits))  # e.g., 140234569120
fruits.append("cherry")
print(id(fruits))  # same id — object was modified in place

Why This Matters for Functions

When we pass a mutable object to a function, the function gets a reference to the same object. Changes inside the function affect the original.

def add_item(lst):
    lst.append("new")  # modifies the original list

my_list = [1, 2, 3]
add_item(my_list)
print(my_list)  # [1, 2, 3, 'new'] — original was changed!

With immutable objects, the function can’t change the original — it can only create a new object locally.

def try_change(text):
    text += " world"   # creates a new string locally
    print(text)         # "hello world"

msg = "hello"
try_change(msg)
print(msg)              # "hello" — original unchanged

In simple language, think of mutable objects like a whiteboard — we can erase and rewrite on the same board. Immutable objects are like printed paper — to change anything, we need a whole new sheet.


3

Strings and String Methods

beginner strings f-strings string-methods

Strings in Python are immutable sequences of characters. Every time we “modify” a string, Python creates a brand new string object behind the scenes.

Creating Strings

single = 'hello'
double = "hello"          # same thing — pick one style and stick with it
multi = """This is a
multi-line string"""       # triple quotes for multi-line
raw = r"C:\new\folder"    # raw string — backslashes are literal

f-strings (Formatted String Literals)

Introduced in Python 3.6, f-strings are the cleanest way to embed expressions inside strings.

name = "Manish"
age = 25
print(f"Hi, I'm {name} and I'm {age} years old.")

# We can put any expression inside the braces
print(f"Next year I'll be {age + 1}")
print(f"Name uppercased: {name.upper()}")
print(f"Pi to 2 decimals: {3.14159:.2f}")  # "3.14"

String Slicing

Strings support slicing just like lists. The syntax is string[start:stop:step].

text = "Python"
print(text[0])       # 'P'
print(text[-1])      # 'n' (last character)
print(text[0:3])     # 'Pyt' (start to stop-1)
print(text[::-1])    # 'nohtyP' (reversed — classic interview question)

Essential String Methods

Here are the methods that come up the most in interviews and everyday coding:

msg = "  Hello, World!  "

# Whitespace removal
msg.strip()          # 'Hello, World!' (both sides)
msg.lstrip()         # 'Hello, World!  ' (left only)
msg.rstrip()         # '  Hello, World!' (right only)

# Case conversion
"hello".upper()      # 'HELLO'
"HELLO".lower()      # 'hello'
"hello world".title() # 'Hello World'
"hello world".capitalize()  # 'Hello world'

# Searching
"hello".find("ll")       # 2 (index of first match, -1 if not found)
"hello".index("ll")      # 2 (same but raises ValueError if not found)
"hello".startswith("he") # True
"hello".endswith("lo")   # True
"hello hello".count("hello")  # 2

# Splitting and joining
"a,b,c".split(",")       # ['a', 'b', 'c']
" ".join(["a", "b", "c"]) # 'a b c'

# Replacing
"hello world".replace("world", "Python")  # 'hello Python'

Checking String Content

These return True or False and are super handy for validation.

"123".isdigit()      # True — only digits
"abc".isalpha()      # True — only letters
"abc123".isalnum()   # True — letters or digits
"   ".isspace()      # True — only whitespace

Strings Are Immutable

This trips up a lot of people. None of the methods above change the original string — they all return a new string.

name = "hello"
name.upper()       # returns "HELLO" but name is still "hello"
name = name.upper() # now name is "HELLO" (reassigned to new object)

In simple language, always remember that string methods return new strings. If we forget to capture the return value, nothing changes.


4

Lists, Tuples, and Sets

beginner lists tuples sets collections

Python gives us three core collection types — lists, tuples, and sets. They look similar but behave very differently. Picking the right one matters.

Type        Ordered   Mutable   Duplicates   Hashable   Best for
List [ ]    Yes       Yes       Yes          No         ordered, changeable data
Tuple ( )   Yes       No        Yes          Yes*       fixed data, dict keys
Set { }     No        Yes       No           No         unique items, fast lookup

*Tuples are hashable only if all their elements are hashable
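
The asterisk footnote is easy to demonstrate — hash() works on a tuple of immutables but fails the moment a list sneaks inside:

```python
# A tuple of immutables is hashable
print(hash((1, 2, 3)))       # some integer

# A tuple containing a list is not
try:
    hash((1, [2, 3]))
except TypeError as e:
    print(f"TypeError: {e}")  # unhashable type: 'list'
```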

Lists

Lists are the workhorse of Python. Ordered, mutable, and can hold mixed types.

fruits = ["apple", "banana", "cherry"]
fruits.append("date")         # add to end
fruits.insert(1, "avocado")   # insert at index 1
fruits.extend(["fig", "grape"]) # add multiple items
fruits.pop()                  # remove and return last item
fruits.pop(0)                 # remove and return item at index 0
fruits.remove("banana")       # remove first occurrence by value
fruits.sort()                 # sort in place
fruits.reverse()              # reverse in place

Tuples

Tuples are like lists that can’t be changed. Once created, we can’t add, remove, or modify elements.

point = (3, 4)
colors = ("red", "green", "blue")
single = (42,)    # note the comma — without it, (42) is just an int

# Tuple unpacking — super useful
x, y = point     # x = 3, y = 4
a, *rest = (1, 2, 3, 4)  # a = 1, rest = [2, 3, 4]

# Swapping variables — classic Python trick
a, b = b, a

Since tuples are immutable (and hashable), we can use them as dictionary keys. Lists can’t do that.
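
A quick sketch of that point — coordinate tuples as dict keys (the grid data is made up):

```python
# Tuples as dict keys — handy for (row, col) coordinates and the like
grid = {(0, 0): "start", (2, 3): "treasure"}
print(grid[(2, 3)])           # 'treasure'

# The same thing with list keys fails immediately
try:
    bad = {[0, 0]: "start"}
except TypeError as e:
    print(f"TypeError: {e}")  # unhashable type: 'list'
```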

Sets

Sets are unordered collections of unique elements. They’re blazing fast for membership checks (in operator) because they use hash tables internally.

nums = {1, 2, 3, 3, 3}   # {1, 2, 3} — duplicates removed
nums.add(4)               # add single item
nums.discard(2)           # remove (no error if missing)
nums.remove(1)            # remove (raises KeyError if missing)

# Set operations — these come up in interviews a lot
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}
a | b    # union: {1, 2, 3, 4, 5, 6}
a & b    # intersection: {3, 4}
a - b    # difference: {1, 2}
a ^ b    # symmetric difference: {1, 2, 5, 6}

Quick Rule of Thumb

  • Need to change items frequently? Use a list.
  • Data should never change? Use a tuple.
  • Need unique items or fast lookups? Use a set.

In simple language, lists are flexible notebooks, tuples are printed receipts, and sets are collections of unique stamps — no duplicates allowed.


5

Dictionaries

beginner dictionaries dict hash-map

A dictionary is Python’s built-in key-value data structure. Think of it like a real dictionary — we look up a word (key) to find its definition (value). Under the hood, it’s a hash map, so lookups are O(1) on average.

Creating Dictionaries

# Literal syntax — most common
user = {"name": "Manish", "age": 25, "active": True}

# Using dict() constructor
user = dict(name="Manish", age=25, active=True)

# From a list of tuples
user = dict([("name", "Manish"), ("age", 25)])

# fromkeys — same value for all keys
defaults = dict.fromkeys(["a", "b", "c"], 0)  # {'a': 0, 'b': 0, 'c': 0}

Accessing Values

user = {"name": "Manish", "age": 25}

user["name"]              # "Manish" — raises KeyError if key doesn't exist
user.get("name")          # "Manish" — returns None if key doesn't exist
user.get("email", "N/A")  # "N/A" — custom default value

The get() method is preferred whenever a missing key is a normal possibility — it returns a default instead of crashing our program. (If a missing key would be a bug, the crashing user["name"] form is actually the better choice.)

Essential Methods

user = {"name": "Manish", "age": 25}

user.keys()      # dict_keys(['name', 'age'])
user.values()    # dict_values(['Manish', 25])
user.items()     # dict_items([('name', 'Manish'), ('age', 25)])

# setdefault — get value if key exists, otherwise set it and return the default
user.setdefault("email", "none@example.com")
# user is now {'name': 'Manish', 'age': 25, 'email': 'none@example.com'}

# update — merge another dict into this one
user.update({"age": 26, "city": "Delhi"})

# pop — remove key and return its value
age = user.pop("age")        # 26
missing = user.pop("x", -1)  # -1 (default if key not found)

# del — remove a key (raises KeyError if missing)
del user["city"]

Iterating Over Dicts

scores = {"math": 90, "science": 85, "english": 92}

# Iterate over keys (default)
for subject in scores:
    print(subject)

# Iterate over key-value pairs — most useful
for subject, score in scores.items():
    print(f"{subject}: {score}")

Dict Comprehensions

Just like list comprehensions, but for dictionaries.

# Square numbers as values
squares = {n: n**2 for n in range(1, 6)}
# {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}

# Flip keys and values
original = {"a": 1, "b": 2}
flipped = {v: k for k, v in original.items()}
# {1: 'a', 2: 'b'}

Merging Dicts (Python 3.9+)

The | operator makes merging clean and readable.

defaults = {"theme": "dark", "lang": "en"}
overrides = {"lang": "hi", "font": "mono"}

merged = defaults | overrides
# {'theme': 'dark', 'lang': 'hi', 'font': 'mono'}

# In-place merge
defaults |= overrides

defaultdict

From the collections module — it auto-creates missing keys with a default factory.

from collections import defaultdict

# Count word frequency
words = ["apple", "banana", "apple", "cherry", "banana", "apple"]
counter = defaultdict(int)  # missing keys default to 0
for word in words:
    counter[word] += 1
# defaultdict(<class 'int'>, {'apple': 3, 'banana': 2, 'cherry': 1})

In simple language, dictionaries are our go-to when we need to associate one piece of data with another. Fast, flexible, and used everywhere in Python.


6

Comprehensions

beginner comprehensions list-comprehension generator-expression

Comprehensions are one of Python’s superpowers. They let us create lists, dicts, and sets in a single readable line instead of writing a full loop. Once we get the hang of them, we’ll use them everywhere.

List Comprehensions

The basic pattern is [expression for item in iterable].

# Without comprehension
squares = []
for n in range(1, 6):
    squares.append(n ** 2)

# With comprehension — same result, one line
squares = [n ** 2 for n in range(1, 6)]
# [1, 4, 9, 16, 25]

Adding Conditions

We can filter items with an if clause.

# Only even numbers
evens = [n for n in range(10) if n % 2 == 0]
# [0, 2, 4, 6, 8]

# Only words longer than 3 characters
words = ["hi", "hello", "hey", "howdy"]
long_words = [w for w in words if len(w) > 3]
# ['hello', 'howdy']

We can also use if-else — but notice it goes before the for, not after.

labels = ["even" if n % 2 == 0 else "odd" for n in range(5)]
# ['even', 'odd', 'even', 'odd', 'even']

Nested Comprehensions

We can flatten nested loops into a single comprehension. The order reads left to right, just like nested for loops.

# Flatten a 2D list
matrix = [[1, 2], [3, 4], [5, 6]]
flat = [num for row in matrix for num in row]
# [1, 2, 3, 4, 5, 6]

# All (x, y) pairs
pairs = [(x, y) for x in range(3) for y in range(3)]
# [(0,0), (0,1), (0,2), (1,0), (1,1), (1,2), (2,0), (2,1), (2,2)]

Dict Comprehensions

Same idea, but we produce key-value pairs.

names = ["alice", "bob", "charlie"]
name_lengths = {name: len(name) for name in names}
# {'alice': 5, 'bob': 3, 'charlie': 7}

# Filter while building
scores = {"math": 90, "art": 60, "science": 85}
passed = {k: v for k, v in scores.items() if v >= 70}
# {'math': 90, 'science': 85}

Set Comprehensions

Like list comprehensions but with curly braces. Duplicates are automatically removed.

words = ["hello", "world", "hello", "python"]
unique_lengths = {len(w) for w in words}
# {5, 6} — only unique lengths

Generator Expressions (Lazy Comprehensions)

If we use parentheses instead of brackets, we get a generator expression. It doesn’t build the whole list in memory — it produces values one at a time.

# This creates a list in memory
total = sum([n ** 2 for n in range(1_000_000)])

# This is lazy — uses almost no memory
total = sum(n ** 2 for n in range(1_000_000))

The only difference is [] vs (). Generator expressions are great when we’re passing data to a function like sum(), max(), or any() and don’t need the intermediate list.
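
We can see the memory difference with sys.getsizeof — the exact byte counts vary by Python version, but the gap is dramatic:

```python
import sys

as_list = [n ** 2 for n in range(100_000)]   # materializes everything
as_gen = (n ** 2 for n in range(100_000))    # just a tiny generator object

print(sys.getsizeof(as_list))  # hundreds of kilobytes
print(sys.getsizeof(as_gen))   # a couple hundred bytes, regardless of range size
```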

When NOT to Use Comprehensions

Comprehensions are awesome, but they can hurt readability when:

  • The logic is complex (multiple conditions, nested transformations)
  • We need side effects (like printing or modifying external state)
  • The line gets too long (if we need to scroll, use a loop instead)

In simple language, if we can say “give me X for each Y” in English, it probably fits in a comprehension. If we need to explain it in a paragraph, use a regular loop.


7

Type Conversion and Truthiness

beginner type-conversion truthiness falsy casting

Python doesn’t do much implicit type conversion compared to JavaScript. Most of the time, we need to convert types explicitly. And when it comes to boolean checks, Python has very clear rules about what’s “truthy” and what’s “falsy”.

Explicit Type Conversion

We convert between types using built-in functions.

# To int
int("42")        # 42
int(3.9)         # 3 (truncates, does NOT round)
int(True)        # 1
int("0b1010", 2) # 10 (binary string to int)

# To float
float("3.14")    # 3.14
float(42)        # 42.0

# To string
str(42)          # "42"
str(3.14)        # "3.14"
str([1, 2, 3])   # "[1, 2, 3]"

# To list / tuple
list("hello")         # ['h', 'e', 'l', 'l', 'o']
list((1, 2, 3))       # [1, 2, 3]
tuple([1, 2, 3])      # (1, 2, 3)
list({1, 2, 3})       # [1, 2, 3] (order not guaranteed)

If a conversion doesn’t make sense, Python raises a ValueError.

int("hello")   # ValueError: invalid literal for int()

Falsy Values

These are the values that evaluate to False in a boolean context. Everything else is truthy.

  • False — the boolean itself
  • None — Python’s null
  • 0 — zero (int)
  • 0.0 — zero (float)
  • "" — empty string
  • [] — empty list
  • {} — empty dict
  • set() — empty set
  • () — empty tuple
# All of these print "falsy"
for val in [False, None, 0, 0.0, "", [], {}, set(), ()]:
    if not val:
        print(f"{val!r} is falsy")

This is why we can write clean checks like if my_list: instead of if len(my_list) > 0:.
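
Here's that idiom in a small made-up validation helper — one `if not items` check covers both the empty list and None:

```python
def describe(items):
    # truthiness handles empty list AND None in a single check
    if not items:
        return "nothing to show"
    return f"{len(items)} item(s)"

print(describe([]))       # 'nothing to show'
print(describe(None))     # 'nothing to show'
print(describe([1, 2]))   # '2 item(s)'
```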

is vs ==

This trips up a lot of people.

  • == checks if two objects have the same value
  • is checks if two variables point to the same object in memory
a = [1, 2, 3]
b = [1, 2, 3]
a == b   # True — same value
a is b   # False — different objects in memory

c = a
a is c   # True — same object (c is just another name for a)

The rule: use is only for None (and other singletons). For booleans, compare with the value directly — if flag: rather than if flag is True:. For everything else, use ==.

# Good
if x is None:
    pass

# Bad — don't do this
if x is 42:  # unreliable, even if it sometimes works
    pass

Short-Circuit Evaluation

Here’s something that surprises many people. Python’s and and or don’t just return True or False — they return the actual value that determined the result.

# `or` returns the first truthy value (or the last value if all falsy)
"hello" or "world"   # "hello" (first truthy)
"" or "fallback"     # "fallback" (first is falsy, so second)
0 or "" or None      # None (all falsy, returns last)

# `and` returns the first falsy value (or the last value if all truthy)
"hello" and "world"  # "world" (all truthy, returns last)
"" and "world"       # "" (first falsy, short-circuits)

This pattern is commonly used for default values.

name = user_input or "Anonymous"  # if user_input is empty, use "Anonymous"

In simple language, Python treats “empty” things as False and “non-empty” things as True. And and/or are smarter than we might expect — they return actual values, not just booleans.


Functions

8

Functions and Arguments

beginner functions args kwargs parameters

Functions are the building blocks of any Python program. We define them with def, and they can accept a flexible variety of arguments.

Basic Syntax

def greet(name):
    """Return a greeting message."""  # docstring — always a good habit
    return f"Hello, {name}!"

result = greet("Manish")  # "Hello, Manish!"

If we don’t explicitly return something, the function returns None. And yes, return and print are very different things — return sends a value back to the caller, print just displays text on screen.
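
A tiny sketch of that difference (show_sum and get_sum are made-up names):

```python
def show_sum(a, b):
    print(a + b)           # displays the value, returns nothing

def get_sum(a, b):
    return a + b           # hands the value back to the caller

result = show_sum(2, 3)    # prints 5
print(result)              # None — print-only functions return None
print(get_sum(2, 3) * 10)  # 50 — returned values can be used further
```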

Positional vs Keyword Arguments

def create_user(name, age, city):
    return {"name": name, "age": age, "city": city}

# Positional — order matters
create_user("Manish", 25, "Delhi")

# Keyword — order doesn't matter
create_user(city="Delhi", name="Manish", age=25)

# Mix of both — positional args must come first
create_user("Manish", city="Delhi", age=25)

Default Argument Values

We can give parameters default values. Parameters with defaults must come after those without.

def connect(host, port=5432, timeout=30):
    print(f"Connecting to {host}:{port} (timeout: {timeout}s)")

connect("localhost")              # uses defaults: port=5432, timeout=30
connect("localhost", port=3306)   # overrides port only

The Mutable Default Argument Trap

This is a classic interview question. Default arguments are evaluated once when the function is defined, not each time it’s called.

# BAD — the same list is shared across all calls
def add_item(item, items=[]):
    items.append(item)
    return items

print(add_item("a"))  # ['a']
print(add_item("b"))  # ['a', 'b'] — surprise! It remembered the old list

# GOOD — use None as default and create a new list inside
def add_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

*args and **kwargs

*args collects extra positional arguments into a tuple. **kwargs collects extra keyword arguments into a dict.

def log(message, *args, **kwargs):
    print(f"MSG: {message}")
    print(f"Extra args: {args}")       # tuple
    print(f"Extra kwargs: {kwargs}")   # dict

log("hello", 1, 2, 3, level="info", source="api")
# MSG: hello
# Extra args: (1, 2, 3)
# Extra kwargs: {'level': 'info', 'source': 'api'}

The names args and kwargs are just convention. We could call them *stuff and **options — the * and ** are what matter.

Argument Unpacking

The * and ** operators can also be used when calling functions to unpack sequences and dicts.

def add(a, b, c):
    return a + b + c

nums = [1, 2, 3]
add(*nums)         # same as add(1, 2, 3)

config = {"a": 10, "b": 20, "c": 30}
add(**config)      # same as add(a=10, b=20, c=30)

Parameter Order Rule

When mixing all types of parameters, the order must be:

  1. Regular positional parameters
  2. *args
  3. Keyword-only parameters (anything after *args)
  4. **kwargs
def example(a, b, *args, option=True, **kwargs):
    pass

In simple language, *args gives us a tuple of “all the extras”, and **kwargs gives us a dict of “all the named extras”. Together, they make our functions incredibly flexible.


9

Lambda Functions

beginner lambda anonymous-functions functional

A lambda is a small anonymous function — a function without a name. It can take any number of arguments but can only have a single expression. Think of it as a shortcut for tiny throwaway functions.

Basic Syntax

# Regular function
def double(x):
    return x * 2

# Lambda equivalent
double = lambda x: x * 2

double(5)  # 10

Notice there’s no return keyword. The expression after the colon is automatically returned.

Where Lambdas Actually Shine

We rarely assign lambdas to variables (that defeats the purpose — just use def). Their real power is as inline arguments to functions like sorted(), map(), and filter().

Sorting with a Custom Key

users = [
    {"name": "Charlie", "age": 30},
    {"name": "Alice", "age": 25},
    {"name": "Bob", "age": 28},
]

# Sort by age
sorted_users = sorted(users, key=lambda u: u["age"])
# [Alice(25), Bob(28), Charlie(30)]

# Sort by name length
sorted_users = sorted(users, key=lambda u: len(u["name"]))

With map() and filter()

nums = [1, 2, 3, 4, 5]

# Double each number
doubled = list(map(lambda x: x * 2, nums))
# [2, 4, 6, 8, 10]

# Keep only even numbers
evens = list(filter(lambda x: x % 2 == 0, nums))
# [2, 4]

Multiple Arguments

Lambdas can take multiple arguments, separated by commas.

add = lambda a, b: a + b
add(3, 4)  # 7

# Sorting tuples by second element
pairs = [(1, 'b'), (3, 'a'), (2, 'c')]
sorted(pairs, key=lambda p: p[1])
# [(3, 'a'), (1, 'b'), (2, 'c')]

Limitations

Lambdas can only contain a single expression. No statements, no assignments, no multi-line logic.

# This is NOT allowed
bad = lambda x: if x > 0: return x  # SyntaxError

# This IS allowed (conditional expression)
absolute = lambda x: x if x >= 0 else -x

Lambda vs def — When to Use Which

  • Use lambda when we need a short, one-off function as an argument (like a sort key)
  • Use def for everything else — named functions, multi-line logic, functions we’ll reuse
# Good use of lambda — short, inline, throwaway
sorted(items, key=lambda x: x.priority)

# Bad use of lambda — just use def
process = lambda x, y: x ** 2 + y ** 2 - 2 * x * y  # hard to read

Common Interview Pattern

We might be asked to sort a list of strings by their last character.

words = ["banana", "apple", "cherry"]
sorted(words, key=lambda w: w[-1])
# ['banana', 'apple', 'cherry'] → sorted by 'a', 'e', 'y'

In simple language, lambdas are one-line functions we write when creating a full def feels like overkill. The only difference is they can only do one thing — one expression, no more.


10

Map, Filter, Reduce, Zip

intermediate map filter reduce zip functional

Python has a handful of functional programming tools that let us transform and combine data without writing explicit loops. They take a function and an iterable, and produce a new iterable.

map() — Transform Every Item

map() applies a function to each item in an iterable. It returns a lazy iterator (not a list), so we wrap it in list() when we need the result.

nums = [1, 2, 3, 4, 5]

# Square each number
squared = list(map(lambda x: x ** 2, nums))
# [1, 4, 9, 16, 25]

# Convert strings to ints
str_nums = ["10", "20", "30"]
int_nums = list(map(int, str_nums))
# [10, 20, 30]

# map with multiple iterables
a = [1, 2, 3]
b = [10, 20, 30]
sums = list(map(lambda x, y: x + y, a, b))
# [11, 22, 33]

filter() — Keep Items That Pass a Test

filter() keeps only the items where the function returns a truthy value.

nums = [1, 2, 3, 4, 5, 6, 7, 8]

# Keep even numbers
evens = list(filter(lambda x: x % 2 == 0, nums))
# [2, 4, 6, 8]

# Remove empty strings
words = ["hello", "", "world", "", "python"]
non_empty = list(filter(None, words))  # None removes falsy values
# ['hello', 'world', 'python']

Passing None as the function is a neat trick — it filters out all falsy values.

reduce() — Combine All Items Into One

reduce() isn’t a built-in — we need to import it from functools. It takes the first two items, applies the function, then takes that result with the next item, and so on until there’s a single value left.

from functools import reduce

nums = [1, 2, 3, 4, 5]

# Sum all numbers (1+2=3, 3+3=6, 6+4=10, 10+5=15)
total = reduce(lambda acc, x: acc + x, nums)
# 15

# Find the max value
biggest = reduce(lambda a, b: a if a > b else b, nums)
# 5

# Flatten a list of lists
nested = [[1, 2], [3, 4], [5, 6]]
flat = reduce(lambda a, b: a + b, nested)
# [1, 2, 3, 4, 5, 6]

Honestly, most of the time we’re better off using sum(), max(), or a loop instead of reduce(). It’s powerful but can be hard to read.
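
For the record, the built-ins really do agree with the reduce versions above — a quick side-by-side:

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

# Same answers, far less ceremony
print(reduce(lambda a, b: a + b, nums), sum(nums))              # 15 15
print(reduce(lambda a, b: a if a > b else b, nums), max(nums))  # 5 5
```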

zip() — Combine Iterables in Parallel

zip() takes multiple iterables and pairs up their elements. It stops at the shortest iterable.

names = ["Alice", "Bob", "Charlie"]
scores = [90, 85, 92]

# Pair them up
pairs = list(zip(names, scores))
# [('Alice', 90), ('Bob', 85), ('Charlie', 92)]

# Super useful with dict()
score_map = dict(zip(names, scores))
# {'Alice': 90, 'Bob': 85, 'Charlie': 92}

# Unzip with *
pairs = [("a", 1), ("b", 2), ("c", 3)]
letters, numbers = zip(*pairs)
# letters = ('a', 'b', 'c'), numbers = (1, 2, 3)

If we need to handle iterables of different lengths without truncating, we use zip_longest from itertools.

from itertools import zip_longest

a = [1, 2, 3]
b = ["x", "y"]

list(zip_longest(a, b, fillvalue="-"))
# [(1, 'x'), (2, 'y'), (3, '-')]

enumerate() — Get Index + Value

Not exactly functional programming, but it pairs perfectly with these tools. It gives us the index and value while iterating.

fruits = ["apple", "banana", "cherry"]
for i, fruit in enumerate(fruits):
    print(f"{i}: {fruit}")
# 0: apple
# 1: banana
# 2: cherry

# Start from a different index
for i, fruit in enumerate(fruits, start=1):
    print(f"{i}: {fruit}")

Comprehensions vs map/filter

In most cases, comprehensions are more Pythonic and readable.

# These are equivalent
list(map(lambda x: x ** 2, nums))   # map style
[x ** 2 for x in nums]              # comprehension — cleaner

list(filter(lambda x: x > 3, nums)) # filter style
[x for x in nums if x > 3]          # comprehension — cleaner

In simple language, map transforms, filter selects, reduce combines, and zip pairs up. But if we can write it as a comprehension, that’s usually the more Pythonic choice.


11

Closures and Nonlocal

intermediate closures nonlocal scope first-class-functions

A closure is a function that remembers the variables from its enclosing scope, even after that scope has finished executing. To understand closures, we first need to know that functions in Python are first-class objects — we can pass them around, return them from other functions, and assign them to variables.

First-Class Functions

def greet(name):
    return f"Hello, {name}!"

# Assign a function to a variable
say_hello = greet
say_hello("Manish")  # "Hello, Manish!"

# Pass a function as an argument
def call_twice(func, arg):
    return func(arg) + " " + func(arg)

call_twice(greet, "World")  # "Hello, World! Hello, World!"

Inner Functions

We can define functions inside other functions. The inner function has access to the outer function’s variables.

def outer():
    message = "Hello from outer"

    def inner():
        print(message)  # can access outer's variable

    inner()

outer()  # "Hello from outer"

What Makes It a Closure?

A closure happens when we return the inner function, and it keeps a reference to the enclosing variables even after the outer function has finished running.

Closure Scope Chain:

Global Scope
  outer(multiplier=3)
    multiplier = 3             <- captured by closure
    inner(x)
      return x * multiplier    <- looks up multiplier from the enclosing scope
    return inner               <- inner remembers multiplier=3

def make_multiplier(multiplier):
    def multiply(x):
        return x * multiplier  # remembers multiplier from outer scope
    return multiply  # return the function, don't call it

double = make_multiplier(2)
triple = make_multiplier(3)

double(5)   # 10 — multiplier=2 is remembered
triple(5)   # 15 — multiplier=3 is remembered

Even though make_multiplier has finished executing, the returned multiply function still has access to the multiplier variable. That’s a closure.
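
We can even peek inside that "memory" — Python exposes the captured variables on the function object itself:

```python
def make_multiplier(multiplier):
    def multiply(x):
        return x * multiplier
    return multiply

double = make_multiplier(2)

# The captured value lives in a closure "cell" on the returned function
print(double.__closure__[0].cell_contents)  # 2
print(double.__code__.co_freevars)          # ('multiplier',)
```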

The nonlocal Keyword

By default, an inner function can read variables from the enclosing scope but can’t reassign them. If we try, Python creates a new local variable instead. The nonlocal keyword lets us modify the enclosing variable.

def counter():
    count = 0

    def increment():
        nonlocal count     # without this, we'd get an UnboundLocalError
        count += 1
        return count

    return increment

tick = counter()
tick()  # 1
tick()  # 2
tick()  # 3 — count persists between calls

Without nonlocal, Python would think count += 1 is trying to read a local variable count before it’s been assigned.
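
To see the error the paragraph describes, here's the same counter with nonlocal removed (counter_broken is our own name for this demo):

```python
def counter_broken():
    count = 0

    def increment():
        count += 1   # no nonlocal — Python treats count as a brand-new local
        return count

    return increment

tick = counter_broken()
try:
    tick()
except UnboundLocalError as e:
    print(f"UnboundLocalError: {e}")
```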

Practical Uses

Factory Functions

Closures are great for creating specialized functions.

def make_greeter(greeting):
    def greet(name):
        return f"{greeting}, {name}!"
    return greet

casual = make_greeter("Hey")
formal = make_greeter("Good evening")

casual("Manish")   # "Hey, Manish!"
formal("Manish")   # "Good evening, Manish!"

Data Hiding

Closures can act like lightweight objects, encapsulating state without a class.

def bank_account(initial_balance):
    balance = initial_balance

    def transact(amount):
        nonlocal balance
        balance += amount
        return balance

    return transact

account = bank_account(100)
account(50)    # 150 (deposit)
account(-30)   # 120 (withdrawal)

In simple language, a closure is an inner function that carries a little backpack of variables from its parent function. Even after the parent is gone, the backpack stays.


12

Decorators

intermediate decorators wrapper functools

A decorator is simply a function that takes another function and extends its behavior. Think of it like wrapping a gift — the gift (original function) is the same, but now it has extra packaging (the decorator logic).

The Core Idea

Before we look at the @ syntax, let’s understand what’s actually happening.

Decorator wrapping flow: original_func → decorator(original_func) → wrapper_func (adds behavior, then calls the original). After decoration, calling original_func() actually runs wrapper_func().

def my_decorator(func):
    def wrapper(*args, **kwargs):
        print("Before the function runs")
        result = func(*args, **kwargs)   # call the original
        print("After the function runs")
        return result
    return wrapper

def say_hello():
    print("Hello!")

# Manual decoration
say_hello = my_decorator(say_hello)
say_hello()
# Before the function runs
# Hello!
# After the function runs

The @ Syntax Sugar

The @decorator syntax is just a cleaner way to write func = decorator(func).

@my_decorator
def say_hello():
    print("Hello!")

# This is EXACTLY the same as:
# say_hello = my_decorator(say_hello)

Always Use functools.wraps

Without functools.wraps, the wrapper replaces the original function’s name and docstring. This breaks introspection and debugging.

from functools import wraps

def my_decorator(func):
    @wraps(func)  # preserves func.__name__, func.__doc__, etc.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

A Practical Example: Timing Decorator

import time
from functools import wraps

def timer(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@timer
def slow_function():
    time.sleep(1)

slow_function()  # "slow_function took 1.0012s"

Decorators with Arguments

If we want our decorator to accept parameters, we need an extra layer of nesting.

def repeat(n):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            result = None  # guard against n == 0, where the loop never runs
            for _ in range(n):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

@repeat(3)
def greet(name):
    print(f"Hello, {name}!")

greet("Manish")  # prints "Hello, Manish!" three times

Stacking Decorators

We can apply multiple decorators. They run bottom to top (closest to the function first).

@decorator_a
@decorator_b
def my_func():
    pass

# Same as: my_func = decorator_a(decorator_b(my_func))
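To see the order concretely, here's a small sketch — tag is a hypothetical decorator factory that records which wrapper runs where:

```python
from functools import wraps

def tag(label):
    """Hypothetical decorator factory: wraps the result in label(...)."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return f"{label}({func(*args, **kwargs)})"
        return wrapper
    return decorator

@tag("a")
@tag("b")
def core():
    return "core"

print(core())  # a(b(core)) — b wraps the function first, a wraps that result
```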

Common Built-in Decorators

  • @property — turns a method into a read-only attribute
  • @staticmethod — method that doesn’t need self or cls
  • @classmethod — method that receives the class (cls) instead of the instance
  • @functools.lru_cache — memoization (caches return values)
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def area(self):
        return 3.14159 * self._radius ** 2

c = Circle(5)
c.area  # 78.53975 — accessed like an attribute, no parentheses
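@functools.lru_cache deserves a quick demo too — here's a sketch using the classic recursive Fibonacci, which is unusably slow without caching:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result (the default maxsize is 128)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))          # 354224848179261915075 — instant thanks to the cache
print(fib.cache_info())  # hit/miss statistics for the cache
```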

In simple language, a decorator wraps a function with extra behavior. The @ syntax is just a shortcut. And always use @wraps to keep the original function’s identity intact.


13

Generators and Iterators

intermediate generators iterators yield lazy-evaluation

In simple language, a generator is a function that can pause and resume. Instead of returning all values at once, it yields them one at a time. This makes generators incredibly memory-efficient for large datasets.

But to understand generators, we need to start with iterators — the protocol they’re built on.

The Iterator Protocol

Any object in Python is an iterator if it implements two methods:

  • __iter__() — returns the iterator object itself
  • __next__() — returns the next value, raises StopIteration when done
# Under the hood, a for loop does this:
nums = [1, 2, 3]
it = iter(nums)       # calls nums.__iter__()
next(it)              # 1 — calls it.__next__()
next(it)              # 2
next(it)              # 3
next(it)              # raises StopIteration

Building an Iterator with a Class

We can create custom iterators, but it takes some boilerplate.

class Countdown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        val = self.current
        self.current -= 1
        return val

for n in Countdown(3):
    print(n)  # 3, 2, 1

That’s a lot of code for something simple. Generators make this much easier.

Generator Functions with yield

A generator function looks like a normal function but uses yield instead of return. Each time we call next(), it runs until the next yield, pauses, and gives us the value.

Generator lifecycle: Created (gen = func()) → next() → Running (executes code) → hits yield → Suspended (value handed to the caller) → next() → Running (resumes after the yield) → return/end → Completed (raises StopIteration).

def countdown(n):
    while n > 0:
        yield n    # pause here, give n to the caller
        n -= 1     # resume here on next call

gen = countdown(3)
next(gen)  # 3
next(gen)  # 2
next(gen)  # 1
next(gen)  # StopIteration

# Or just use a for loop
for n in countdown(3):
    print(n)  # 3, 2, 1

Generator Expressions

Just like list comprehensions but with parentheses. They produce values lazily.

# List comprehension — builds entire list in memory
squares_list = [x ** 2 for x in range(1_000_000)]

# Generator expression — produces one value at a time
squares_gen = (x ** 2 for x in range(1_000_000))

# Perfect for passing to functions
sum(x ** 2 for x in range(1_000_000))  # no extra memory

Memory Benefits

This is the biggest win. A list of 10 million items takes megabytes of memory. A generator that yields 10 million items takes almost nothing.

# This creates a massive list in memory
big_list = [x for x in range(10_000_000)]

# This uses almost no memory
big_gen = (x for x in range(10_000_000))
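We can see the gap with sys.getsizeof — a sketch using one million items (exact byte counts vary by Python version and platform):

```python
import sys

big_list = [x for x in range(1_000_000)]
big_gen = (x for x in range(1_000_000))

# The list's pointer array alone is several megabytes
print(sys.getsizeof(big_list))

# The generator is a tiny fixed-size object, regardless of the range
print(sys.getsizeof(big_gen))
```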

send() and close()

We can send values back into a generator and close it early.

def accumulator():
    total = 0
    while True:
        value = yield total
        if value is None:
            break
        total += value

gen = accumulator()
next(gen)          # 0 — prime the generator
gen.send(10)       # 10
gen.send(20)       # 30
gen.close()        # stop the generator

yield from

When a generator needs to yield all values from another iterable, yield from is cleaner than a loop.

def chain(*iterables):
    for it in iterables:
        yield from it  # same as: for item in it: yield item

list(chain([1, 2], [3, 4], [5, 6]))
# [1, 2, 3, 4, 5, 6]

Infinite Sequences

Generators can produce values forever since they only compute on demand.

def fibonacci():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# Take first 10 fibonacci numbers
from itertools import islice
list(islice(fibonacci(), 10))
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

In simple language, generators are lazy functions. They do the minimum work possible, computing values only when asked. When we’re dealing with large data or infinite sequences, generators are the way to go.


14

Built-in Functions

beginner builtins enumerate sorted any all

Python comes with a rich set of built-in functions that we don’t need to import. Knowing them saves us from reinventing the wheel and makes our code cleaner.

enumerate() — Index + Value Together

Instead of tracking an index manually, enumerate() gives us both.

fruits = ["apple", "banana", "cherry"]

for i, fruit in enumerate(fruits):
    print(f"{i}: {fruit}")

# Start counting from 1 instead of 0
for i, fruit in enumerate(fruits, start=1):
    print(f"{i}. {fruit}")

sorted() vs .sort()

sorted() returns a new sorted list. .sort() sorts a list in place and returns None.

nums = [3, 1, 4, 1, 5]

sorted_nums = sorted(nums)  # [1, 1, 3, 4, 5] — nums is unchanged
nums.sort()                  # None — but nums is now [1, 1, 3, 4, 5]

# Reverse sort
sorted(nums, reverse=True)  # [5, 4, 3, 1, 1]

# Sort by custom key
words = ["banana", "apple", "fig"]
sorted(words, key=len)  # ['fig', 'apple', 'banana']

The key difference: sorted() works on any iterable (strings, tuples, sets, dicts), while .sort() only works on lists.
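For instance, passing a dict to sorted() sorts its keys — and with a key function we can sort the items by value instead:

```python
scores = {"bob": 82, "alice": 91, "carol": 78}

print(sorted(scores))  # ['alice', 'bob', 'carol'] — keys, sorted

# Sort (name, score) pairs by score
print(sorted(scores.items(), key=lambda kv: kv[1]))
# [('carol', 78), ('bob', 82), ('alice', 91)]
```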

reversed()

Returns a reverse iterator. Doesn’t modify the original.

for n in reversed([1, 2, 3]):
    print(n)  # 3, 2, 1

# Convert to list if needed
list(reversed([1, 2, 3]))  # [3, 2, 1]

any() and all()

These are incredibly useful for checking conditions across an iterable.

  • any() — returns True if at least one element is truthy
  • all() — returns True if every element is truthy
nums = [0, 1, 2, 3]
any(nums)  # True — at least one non-zero
all(nums)  # False — 0 is falsy

# With conditions
scores = [85, 92, 78, 90]
all(s >= 70 for s in scores)  # True — everyone passed
any(s == 100 for s in scores) # False — no perfect score

isinstance() and type()

isinstance() checks if an object is an instance of a type (respects inheritance). type() gives the exact type.

x = 42
isinstance(x, int)           # True
isinstance(x, (int, float))  # True — check multiple types

type(x)          # <class 'int'>
type(x) is int   # True — exact match only

Use isinstance() in most cases. Use type() only when we need the exact type without inheritance.
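The bool example makes the difference concrete:

```python
flag = True

print(isinstance(flag, int))  # True — bool is a subclass of int
print(type(flag) is int)      # False — the exact type is bool
print(type(flag) is bool)     # True
```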

id() and hash()

id() returns an object’s unique identity (in CPython, its memory address). hash() returns the hash value (used in dicts and sets).

x = "hello"
id(x)     # e.g., 140234567890 — unique memory address
hash(x)   # e.g., 8464330393063589907 — hash value

# Only immutable types are hashable
hash([1, 2])  # TypeError — lists aren't hashable

len(), range(), abs(), round()

The everyday workhorses.

len([1, 2, 3])        # 3
len("hello")           # 5
len({"a": 1, "b": 2}) # 2

list(range(5))         # [0, 1, 2, 3, 4]
list(range(2, 8))      # [2, 3, 4, 5, 6, 7]
list(range(0, 10, 2))  # [0, 2, 4, 6, 8]

abs(-42)      # 42
round(3.14159, 2)  # 3.14
round(2.5)         # 2 — banker's rounding (rounds to even)

Watch out: round(2.5) returns 2, not 3. Python uses banker’s rounding (round half to even) to reduce bias.
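The pattern is easiest to see across consecutive half-values — each rounds to the nearest even integer:

```python
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2
print(round(3.5))  # 4
```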

min() and max() with key

These accept an optional key argument, just like sorted().

nums = [3, -7, 2, -4, 5]

min(nums)            # -7
max(nums)            # 5

# By absolute value
min(nums, key=abs)   # 2
max(nums, key=abs)   # -7

# With dicts
users = [{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]
youngest = min(users, key=lambda u: u["age"])
# {'name': 'Bob', 'age': 25}

repr() vs str()

str() gives a human-readable string. repr() gives an unambiguous developer-friendly string.

s = "hello\nworld"
print(str(s))    # hello
                 # world
print(repr(s))   # 'hello\nworld'

# In f-strings, !r uses repr
name = "Manish"
f"{name!r}"  # "'Manish'" — with quotes

input()

Reads a line from the user. Always returns a string.

name = input("What's your name? ")     # returns str
age = int(input("How old are you? "))   # convert to int ourselves

In simple language, Python’s built-in functions are our Swiss Army knife. Learning them well means writing less code and solving problems faster. When in doubt, check if there’s a built-in for it first.


Object-Oriented Python

15

Classes and Objects

beginner classes objects OOP init

A class is a blueprint for creating objects. Think of it like a cookie cutter — the class defines the shape, and each cookie we stamp out is an object (also called an instance).

Defining a Class

We use the class keyword. The __init__ method runs automatically when we create a new object — it’s where we set up the initial state.

class Dog:
    species = "Canis familiaris"  # class attribute — shared by all dogs

    def __init__(self, name, age):
        self.name = name  # instance attribute — unique to each dog
        self.age = age

    def bark(self):
        return f"{self.name} says Woof!"

What Is self?

Every method in a class receives self as its first argument. It’s a reference to the current instance. In simple language, self is how the object talks about itself — “my name”, “my age”.

Python passes self automatically. We never need to do it manually.

Creating Objects

We call the class like a function. Python creates the object, then calls __init__ on it.

buddy = Dog("Buddy", 3)
max_dog = Dog("Max", 5)  # avoid naming this `max` — that would shadow the built-in

print(buddy.name)       # Buddy
print(max_dog.bark())   # Max says Woof!
print(buddy.species)    # Canis familiaris — shared across all instances

Instance vs Class Attributes

  • Class attributes are defined directly in the class body (like species above). They’re shared by every instance.
  • Instance attributes are defined inside __init__ with self.something. Each object gets its own copy.
buddy.species = "Robot Dog"  # creates an instance attribute, shadows the class one
print(buddy.species)    # Robot Dog
print(max_dog.species)  # Canis familiaris — unchanged

__str__ and __repr__

By default, printing an object gives us something ugly like <__main__.Dog object at 0x...>. We can fix that with two special methods:

  • __str__ — the “pretty” version for end users (what print() uses)
  • __repr__ — the “developer” version (what the REPL shows, should ideally be unambiguous)
class Dog:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __str__(self):
        return f"{self.name}, {self.age} years old"

    def __repr__(self):
        return f"Dog(name='{self.name}', age={self.age})"

buddy = Dog("Buddy", 3)
print(buddy)       # Buddy, 3 years old  (__str__)
print(repr(buddy)) # Dog(name='Buddy', age=3)  (__repr__)

In simple language, a class is just a way to bundle data and behavior together. We define the template once, then stamp out as many objects as we need — each with their own state but sharing the same methods.


16

Inheritance and MRO

intermediate inheritance MRO super diamond-problem

Inheritance lets a class borrow attributes and methods from another class. The parent is called the base class, and the child is the derived class. Think of it like genetics — a child inherits traits from their parents.

Single Inheritance

The simplest form. One child, one parent.

class Animal:
    def __init__(self, name):
        self.name = name

    def speak(self):
        return "..."

class Dog(Animal):  # Dog inherits from Animal
    def speak(self):
        return f"{self.name} says Woof!"

dog = Dog("Buddy")
print(dog.speak())  # Buddy says Woof!
print(dog.name)     # Buddy — inherited from Animal

super() — Calling the Parent

Instead of hardcoding the parent class name, we use super(). It gives us a reference to the parent so we can call its methods.

class Dog(Animal):
    def __init__(self, name, breed):
        super().__init__(name)  # call Animal's __init__
        self.breed = breed

Multiple Inheritance and the Diamond Problem

Python allows a class to inherit from multiple parents. This is powerful but can get confusing — especially with the diamond problem.

Diamond inheritance: A is the base class; B(A) and C(A) both inherit from it; D(B, C) inherits from both. The MRO resolves this as D → B → C → A → object.

The diamond problem: if both B and C inherit from A, and D inherits from both B and C — which version of A’s methods does D get?

C3 Linearization (MRO)

Python solves this with C3 linearization — an algorithm that creates a predictable method lookup order. We can inspect it:

class A:
    def who(self): return "A"

class B(A):
    def who(self): return "B"

class C(A):
    def who(self): return "C"

class D(B, C):
    pass

print(D().who())   # B — because B comes before C in MRO
print(D.mro())     # [D, B, C, A, object]

The rule is: Python checks the class itself first, then left-to-right through the parents, making sure a parent isn’t visited before all its children.

isinstance and issubclass

print(isinstance(D(), B))    # True — D is a child of B
print(isinstance(D(), A))    # True — D is also a child of A (transitive)
print(issubclass(D, A))      # True — the class D itself is a subclass of A

Mixins

A mixin is a small class meant to be combined with others via multiple inheritance. It adds a specific behavior without being a standalone class.

class JsonMixin:
    def to_json(self):
        import json
        return json.dumps(self.__dict__)

class User(JsonMixin):
    def __init__(self, name):
        self.name = name

print(User("Manish").to_json())  # {"name": "Manish"}

In simple language, inheritance lets us reuse code by building on existing classes. When multiple parents are involved, Python uses MRO (C3 linearization) to decide which method to call — always predictable, always left-to-right.


17

Dunder (Magic) Methods

intermediate dunder magic-methods operator-overloading

Dunder methods (short for double underscore) are special methods that Python calls behind the scenes. They let us define how our objects behave with built-in operations like +, len(), print(), and even in.

Think of them like hooks — Python gives us specific spots to plug in custom behavior.

Object Basics

We’ve already seen __init__. Here are the other essentials:

class Book:
    def __init__(self, title, pages):
        self.title = title
        self.pages = pages

    def __str__(self):          # print(book) — friendly output
        return f"'{self.title}' ({self.pages} pages)"

    def __repr__(self):         # repr(book) — developer output
        return f"Book('{self.title}', {self.pages})"

    def __len__(self):          # len(book)
        return self.pages

The only difference between __str__ and __repr__: __str__ is for humans, __repr__ is for developers. If __str__ is missing, Python falls back to __repr__ for print() and str() — but not the other way around: defining only __str__ leaves the default __repr__ in place.
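A quick sketch of the fallback (OnlyRepr and OnlyStr are hypothetical names for illustration):

```python
class OnlyRepr:
    def __repr__(self):
        return "OnlyRepr()"

class OnlyStr:
    def __str__(self):
        return "friendly"

print(str(OnlyRepr()))   # OnlyRepr() — str() falls back to __repr__
print(repr(OnlyStr()))   # <...OnlyStr object at 0x...> — repr() does NOT use __str__
```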

Comparison Methods

These let us use ==, <, >, etc. with our objects.

class Book:
    def __init__(self, title, pages):
        self.title = title
        self.pages = pages

    def __eq__(self, other):    # book1 == book2
        return self.pages == other.pages

    def __lt__(self, other):    # book1 < book2
        return self.pages < other.pages

    def __gt__(self, other):    # book1 > book2
        return self.pages > other.pages

Pro tip: if we define __eq__ and __lt__, we can use functools.total_ordering to auto-generate the rest (<=, >=).
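A sketch of that shortcut — we define __eq__ and __lt__, and total_ordering fills in the rest:

```python
from functools import total_ordering

@total_ordering
class Book:
    def __init__(self, title, pages):
        self.title = title
        self.pages = pages

    def __eq__(self, other):
        return self.pages == other.pages

    def __lt__(self, other):
        return self.pages < other.pages

# <=, >, and >= are generated from __eq__ and __lt__
novella = Book("Novella", 90)
epic = Book("Epic", 900)
print(novella <= epic)  # True
print(epic > novella)   # True
```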

Arithmetic Methods

We can make our objects work with +, *, and more.

class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):   # v1 + v2
        return Vector(self.x + other.x, self.y + other.y)

    def __mul__(self, scalar):  # v1 * 3
        return Vector(self.x * scalar, self.y * scalar)

    def __repr__(self):
        return f"Vector({self.x}, {self.y})"

v = Vector(1, 2) + Vector(3, 4)
print(v)  # Vector(4, 6)

Container Methods

These make our objects behave like lists or dicts.

class Playlist:
    def __init__(self, songs):
        self._songs = songs

    def __getitem__(self, index):   # playlist[0]
        return self._songs[index]

    def __setitem__(self, index, value):  # playlist[0] = "new song"
        self._songs[index] = value

    def __contains__(self, item):   # "song" in playlist
        return item in self._songs

    def __len__(self):              # len(playlist)
        return len(self._songs)

Callable Objects

__call__ lets us use an object like a function.

class Multiplier:
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, value):
        return value * self.factor

double = Multiplier(2)
print(double(5))   # 10 — calling the object like a function

Context Manager Methods

__enter__ and __exit__ let our objects work with the with statement.

import time

class Timer:
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        print(f"Elapsed: {time.time() - self.start:.2f}s")

with Timer():
    sum(range(1_000_000))  # Elapsed: 0.03s

__hash__

If we define __eq__, Python automatically sets __hash__ to None (making the object unhashable). To use our objects in sets or as dict keys, we need to define __hash__ too.

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return self.x == other.x and self.y == other.y

    def __hash__(self):
        return hash((self.x, self.y))

In simple language, dunder methods are Python’s way of letting us teach our objects how to behave with built-in operations. Almost everything in Python — from + to len() to for loops — is powered by dunder methods under the hood.


18

@staticmethod vs @classmethod

intermediate staticmethod classmethod methods

Python has three types of methods inside a class, and the only difference is what they get access to. Let’s break them down.

  • Instance method — def method(self) — has access to self (the instance) and, through self.__class__, the class
  • Class method — def method(cls) — has access to cls (the class itself), not the instance
  • Static method — def method() — has access to neither; it’s just a regular function that lives inside the class

Instance Methods (Regular)

The default. They take self as the first argument, giving access to the instance’s data.

class Pizza:
    def __init__(self, size, toppings):
        self.size = size
        self.toppings = toppings

    def describe(self):  # instance method
        return f"{self.size} pizza with {', '.join(self.toppings)}"

p = Pizza("large", ["mushrooms", "olives"])
print(p.describe())  # large pizza with mushrooms, olives

Class Methods (@classmethod)

They take cls instead of self. They work on the class, not a specific instance. The most common use is the factory pattern — alternative ways to create objects.

class Pizza:
    def __init__(self, size, toppings):
        self.size = size
        self.toppings = toppings

    @classmethod
    def margherita(cls):  # factory method
        return cls("medium", ["mozzarella", "tomato", "basil"])

    @classmethod
    def pepperoni(cls):
        return cls("large", ["mozzarella", "pepperoni"])

p = Pizza.margherita()  # no need to remember exact toppings

Notice we use cls(...) instead of Pizza(...). This matters for inheritance — if a subclass calls margherita(), cls will be the subclass, not Pizza.
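A quick sketch of why that matters — DeepDish is a hypothetical subclass that inherits the factory and gets instances of itself:

```python
class Pizza:
    def __init__(self, size, toppings):
        self.size = size
        self.toppings = toppings

    @classmethod
    def margherita(cls):
        return cls("medium", ["mozzarella", "tomato", "basil"])

class DeepDish(Pizza):  # hypothetical subclass — inherits the factory
    pass

p = DeepDish.margherita()
print(type(p).__name__)  # DeepDish — because cls was DeepDish, not Pizza
```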

Static Methods (@staticmethod)

They don’t take self or cls. They’re just regular functions that happen to live inside the class because they’re logically related.

class Pizza:
    @staticmethod
    def validate_topping(topping):  # utility function
        valid = ["mushrooms", "olives", "pepperoni", "mozzarella"]
        return topping.lower() in valid

print(Pizza.validate_topping("Olives"))  # True

When to Use Each

  • Instance method — when we need to read or modify the object’s state (self.something)
  • Class method — when we need the class itself (factory methods, alternative constructors)
  • Static method — when the logic is related to the class but doesn’t need the instance or class reference. It’s basically a namespaced utility function

A good rule of thumb: start with instance methods. Only reach for @classmethod or @staticmethod when we genuinely don’t need access to the instance or want an alternative constructor.

In simple language, instance methods know about the object, class methods know about the class, and static methods know about neither — they’re just functions wearing the class’s uniform.


19

Property Decorators

intermediate property getter setter encapsulation

In languages like Java, we write getName() and setName() methods to control access to attributes. Python says: forget that noise. We have @property — it lets us use regular attribute syntax while running custom logic behind the scenes.

The Problem

Say we want to validate an age before setting it. Without properties, we’d need getter/setter methods and everyone would have to remember to call them.

# The Java way — not Pythonic
class Person:
    def get_age(self):
        return self._age

    def set_age(self, value):
        if value < 0:
            raise ValueError("Age can't be negative")
        self._age = value

@property to the Rescue

With @property, we access age like a normal attribute, but Python secretly calls our getter/setter.

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age  # this calls the setter!

    @property
    def age(self):  # getter
        return self._age

    @age.setter
    def age(self, value):  # setter
        if value < 0:
            raise ValueError("Age can't be negative")
        self._age = value

p = Person("Manish", 25)
print(p.age)      # 25 — calls the getter
p.age = 30        # calls the setter
p.age = -5        # ValueError: Age can't be negative

Notice we store the actual value in self._age (with underscore) but expose it as self.age. The underscore is a convention meaning “private, don’t touch directly.”

Read-Only Properties

If we only define the @property getter and skip the setter, the attribute becomes read-only.

class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def area(self):  # computed, read-only
        return 3.14159 * self._radius ** 2

c = Circle(5)
print(c.area)   # 78.53975
c.area = 100    # AttributeError: can't set attribute

This is great for computed properties — values derived from other attributes.

The Deleter

Less common, but we can also define what happens when someone uses del.

class Person:
    def __init__(self, name):
        self._name = name

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value

    @name.deleter
    def name(self):
        print("Deleting name...")
        self._name = None

p = Person("Manish")
del p.name  # Deleting name...

Validation in Setters

Properties really shine when we need to enforce rules. We can keep all validation in one place.

class Temperature:
    def __init__(self, celsius):
        self.celsius = celsius  # triggers setter

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        if value < -273.15:
            raise ValueError("Below absolute zero!")
        self._celsius = value

    @property
    def fahrenheit(self):  # computed from celsius
        return self._celsius * 9/5 + 32

In simple language, @property lets us add logic to attribute access without changing how the outside world uses our class. No ugly get_x() calls — just clean obj.x syntax with validation and computation happening under the hood.


20

Abstract Classes and Interfaces

intermediate ABC abstract interfaces contracts

An abstract class is a class that can’t be instantiated on its own — it exists only to be a base for other classes. Think of it like a contract: “if you inherit from me, you must implement these methods.”

Why Do We Need Them?

Say we’re building a payment system. We want every payment processor to have a process() method. Without enforcement, someone might forget and we’d only find out at runtime.

# Without ABC — no enforcement
class PaymentProcessor:
    def process(self, amount):
        raise NotImplementedError  # only catches at runtime

class StripeProcessor(PaymentProcessor):
    pass  # forgot to implement process() — no error until we call it

Using ABC

The abc module gives us proper enforcement. If a subclass doesn’t implement the required methods, Python raises an error at instantiation time, not when we call the method.

from abc import ABC, abstractmethod

class PaymentProcessor(ABC):
    @abstractmethod
    def process(self, amount):
        """Process a payment of the given amount."""
        pass

    @abstractmethod
    def refund(self, transaction_id):
        """Refund a transaction."""
        pass

    def log(self, message):  # concrete method — inherited as-is
        print(f"[Payment] {message}")

Now let’s try to use it:

# This fails immediately
p = PaymentProcessor()  # TypeError: Can't instantiate abstract class

# This also fails — we didn't implement refund()
class BadProcessor(PaymentProcessor):
    def process(self, amount):
        print(f"Processing ${amount}")

bp = BadProcessor()  # TypeError: Can't instantiate abstract class

We must implement all abstract methods:

class StripeProcessor(PaymentProcessor):
    def process(self, amount):
        print(f"Stripe: charging ${amount}")

    def refund(self, transaction_id):
        print(f"Stripe: refunding {transaction_id}")

sp = StripeProcessor()  # works!
sp.log("Payment received")  # [Payment] Payment received — inherited

Abstract Properties

We can also make properties abstract.

class Shape(ABC):
    @property
    @abstractmethod
    def area(self):
        pass

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    @property
    def area(self):  # must implement as a property
        return 3.14159 * self.radius ** 2

Duck Typing vs ABC

Python is famous for duck typing — “if it walks like a duck and quacks like a duck, it’s a duck.” We don’t need formal interfaces most of the time. Just call the method and trust it’s there.

ABCs are for when we want stricter guarantees — like framework code, plugin systems, or team projects where we need to enforce a contract.

Protocol (Structural Typing)

Python 3.8 introduced Protocol as a lighter alternative. It’s like an interface that doesn’t require inheritance — it just checks if the methods exist.

from typing import Protocol

class Drawable(Protocol):
    def draw(self) -> None: ...

def render(shape: Drawable):  # any object with draw() works
    shape.draw()

The difference: ABCs require explicit inheritance (class Foo(MyABC)). Protocols just check the shape of the object — no inheritance needed.
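If we also want isinstance() checks against a Protocol, we can add @runtime_checkable — a sketch (Square is a hypothetical class for illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Drawable(Protocol):
    def draw(self) -> None: ...

class Square:  # note: no inheritance from Drawable
    def draw(self) -> None:
        print("drawing a square")

# True — the check only looks at the object's shape (method names)
print(isinstance(Square(), Drawable))
```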

In simple language, ABCs let us say “you must have these methods” and Python enforces it the moment we try to create an object. Use them when duck typing isn’t strict enough.


21

Dataclasses

intermediate dataclasses data-class namedtuple

Ever written a class where __init__ just assigns a bunch of attributes, then added __repr__ and __eq__ by hand? That’s boilerplate. The @dataclass decorator (Python 3.7+) generates all of it for us.

Before vs After

# Without dataclass — lots of boilerplate
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __repr__(self):
        return f"Point(x={self.x}, y={self.y})"
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y

# With dataclass — same thing, 3 lines
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

That’s it. We get __init__, __repr__, and __eq__ for free.

Default Values

We can set defaults just like function arguments. The only rule: fields with defaults must come after fields without.

@dataclass
class User:
    name: str
    email: str
    active: bool = True  # default value
    role: str = "viewer"

u = User("Manish", "manish@example.com")
print(u)  # User(name='Manish', email='manish@example.com', active=True, role='viewer')

field() and default_factory

For mutable defaults (lists, dicts), we can’t just use = [] — that’s the same shared list gotcha as function defaults. We use field(default_factory=...) instead.

from dataclasses import dataclass, field

@dataclass
class Team:
    name: str
    members: list = field(default_factory=list)  # new list for each instance
    metadata: dict = field(default_factory=dict)

The field() function also lets us exclude fields from repr or comparison:

@dataclass
class User:
    name: str
    password: str = field(repr=False)  # hidden in __repr__
    _internal: int = field(default=0, compare=False)  # ignored in __eq__

Frozen Dataclasses (Immutable)

Adding frozen=True makes instances read-only. Any attempt to change an attribute raises FrozenInstanceError.

@dataclass(frozen=True)
class Config:
    host: str
    port: int

c = Config("localhost", 8080)
c.port = 9090  # FrozenInstanceError!

Frozen dataclasses are also hashable by default, so we can use them in sets and as dict keys.
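That means frozen instances work anywhere hashing is required — a sketch reusing the Config class from above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    host: str
    port: int

# Equal configs hash the same, so the set deduplicates them
seen = {Config("localhost", 8080), Config("localhost", 8080)}
print(len(seen))  # 1

# And they work as dict keys
lookup = {Config("localhost", 8080): "dev"}
print(lookup[Config("localhost", 8080)])  # dev
```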

__post_init__

If we need custom logic after initialization, we define __post_init__. Python calls it right after the auto-generated __init__.

@dataclass
class Rectangle:
    width: float
    height: float
    area: float = field(init=False)  # not passed to __init__

    def __post_init__(self):
        self.area = self.width * self.height

r = Rectangle(3, 4)
print(r.area)  # 12.0

slots=True (Python 3.10+)

Adding slots=True generates __slots__, making instances use less memory and have faster attribute access.

@dataclass(slots=True)
class Point:
    x: float
    y: float

Dataclass vs NamedTuple vs Dict

  • dict — use when the structure is dynamic or we’re just passing data around loosely
  • NamedTuple — immutable, lightweight, works as a tuple (can unpack, index). Great for simple records
  • dataclass — mutable by default, supports methods, default factories, inheritance. Best for structured objects

from typing import NamedTuple

class PointNT(NamedTuple):  # immutable, tuple-like
    x: float
    y: float

p = PointNT(1, 2)
print(p[0])  # 1 — works like a tuple

In simple language, @dataclass kills the boilerplate. We just declare the fields and Python generates the boring methods for us. Use it whenever we’re building a class that’s mainly about holding data.


Scope & Memory

22

LEGB Scope Rule

beginner scope LEGB namespace global local

When we use a variable name, Python needs to figure out what it refers to. It searches through four scopes in a specific order: L → E → G → B. The first match wins.

  • L — Local scope — z = "local"
  • E — Enclosing scope — y = "enclosing"
  • G — Global scope — x = "global"
  • B — Built-in scope — print, len, range, int, str...

lookup: L → E → G → B
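A quick demonstration of the lookup order — each inner definition shadows the outer one, and the innermost match wins:

```python
x = "global"

def outer():
    x = "enclosing"      # shadows the global x

    def inner():
        x = "local"      # shadows the enclosing x
        print(x)         # "local" — found first in the local scope

    inner()

outer()
print(x)  # "global" — the global x was never touched
```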

Local Scope

Variables created inside a function are local. They exist only while the function runs and can’t be accessed from outside.

def greet():
    message = "hello"  # local to greet()
    print(message)

greet()          # hello
print(message)   # NameError: name 'message' is not defined

Enclosing Scope

When we have nested functions, the inner function can access variables from the outer (enclosing) function.

def outer():
    name = "Manish"       # enclosing scope for inner()

    def inner():
        print(name)       # found in enclosing scope

    inner()

outer()  # Manish

Global Scope

Variables defined at the module level (outside any function). Accessible everywhere in the file.

counter = 0  # global

def increment():
    global counter   # tell Python we mean the global one
    counter += 1

increment()
print(counter)  # 1

Without the global keyword, Python would treat counter as a new local variable and throw an UnboundLocalError when we try to += it.

Built-in Scope

This is where Python’s built-in functions live — print, len, range, int, str, etc. We can technically override them (please don’t).

# Don't do this — but it shows how scope works
print = "oops"    # shadows the built-in print
print("hello")    # TypeError: 'str' object is not callable
del print         # removes the shadow — the built-in is reachable again

The global Keyword

Lets us modify a global variable from inside a function. Without it, assigning to a variable inside a function creates a new local one.

x = 10

def change():
    global x
    x = 20

change()
print(x)  # 20

The nonlocal Keyword

Same idea, but for enclosing scope. Lets an inner function modify a variable from the outer function.

def counter():
    count = 0

    def increment():
        nonlocal count  # modify the enclosing variable
        count += 1
        return count

    return increment

c = counter()
print(c())  # 1
print(c())  # 2

In simple language, when Python sees a variable name, it checks four places in order: the current function, any enclosing functions, the module level, and finally the built-ins. First match wins. Use global and nonlocal when we need to write to an outer scope — but use them sparingly.


23

Shallow vs Deep Copy

intermediate copy deepcopy references shallow-copy

In Python, variables don’t hold values directly — they hold references (pointers) to objects in memory. This means copying isn’t always what we expect.

Shallow Copy

  original ─► [1, 2, •]      copy ─► [1, 2, •]     (two new outer lists)
                   └─────► [3, 4] ◄─────┘          (nested list SHARED)

Deep Copy

  original ─► [1, 2, [3, 4]]
  copy     ─► [1, 2, [3, 4]]                       (fully independent copies)

Assignment (=) — Not a Copy at All

Assignment just creates another name pointing to the same object. No copying happens.

a = [1, 2, [3, 4]]
b = a               # b points to the SAME object
b.append(5)
print(a)  # [1, 2, [3, 4], 5] — a is affected too!
print(id(a) == id(b))  # True — same object in memory

Shallow Copy

Creates a new outer object, but the nested objects inside still share the same references.

import copy

a = [1, 2, [3, 4]]
b = copy.copy(a)   # shallow copy

# Other ways to shallow copy:
# b = a[:]         — slice notation
# b = list(a)      — constructor
# b = a.copy()     — list's built-in method

b.append(5)
print(a)  # [1, 2, [3, 4]] — outer list is independent

b[2].append(99)
print(a)  # [1, 2, [3, 4, 99]] — nested list is SHARED!

This is where shallow copies break. The top-level list is new, but a[2] and b[2] still point to the exact same [3, 4] list.
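The same sharing applies to dicts — dict.copy() is shallow too, so nested values are still shared:

```python
config = {"host": "localhost", "servers": ["a", "b"]}
clone = config.copy()          # shallow — same as copy.copy(config)

clone["host"] = "remote"       # top-level key: independent
clone["servers"].append("c")   # nested list: shared!

print(config["host"])      # localhost — unaffected
print(config["servers"])   # ['a', 'b', 'c'] — mutated through the clone
```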

Deep Copy

Creates a completely independent copy — every nested object is recursively duplicated.

import copy

a = [1, 2, [3, 4]]
b = copy.deepcopy(a)

b[2].append(99)
print(a)  # [1, 2, [3, 4]] — completely unaffected
print(b)  # [1, 2, [3, 4, 99]]

Verifying with id()

We can use id() to check if two variables point to the same object.

import copy

a = [1, 2, [3, 4]]
b = copy.copy(a)
c = copy.deepcopy(a)

print(id(a) == id(b))       # False — different outer lists
print(id(a[2]) == id(b[2])) # True — same nested list (shallow!)
print(id(a[2]) == id(c[2])) # False — different nested lists (deep!)

Quick Reference

Operation              | New outer object? | New nested objects?
b = a                  | No                | No
b = a.copy()           | Yes               | No
b = copy.deepcopy(a)   | Yes               | Yes

In simple language, assignment just creates another label for the same box. Shallow copy gives us a new box but the items inside are still shared. Deep copy gives us a completely new box with brand new items — nothing is shared.


24

Garbage Collection and Reference Counting

intermediate garbage-collection reference-counting memory gc

Python manages memory automatically. We create objects, use them, and Python cleans them up when they’re no longer needed. The primary mechanism is reference counting, backed by a cyclic garbage collector for edge cases.

Reference Counting

Every object in Python has a reference count — the number of names or containers pointing to it. When that count drops to zero, Python immediately frees the memory.

Reference Counting in Action

  a = [1, 2]   →   [1, 2]  refs: 1
  b = a        →   [1, 2]  refs: 2
  del a        →   [1, 2]  refs: 1
  del b        →   [1, 2]  refs: 0 → freed!

We can check an object’s reference count with sys.getrefcount():

import sys

a = [1, 2, 3]
print(sys.getrefcount(a))  # 2 (one for 'a', one for the argument to getrefcount)

b = a
print(sys.getrefcount(a))  # 3

del b
print(sys.getrefcount(a))  # 2 again

Note: getrefcount() always shows one extra because passing the object to the function temporarily creates another reference.

What Increases the Count?

  • Assigning to a variable: a = obj
  • Adding to a container: my_list.append(obj)
  • Passing as a function argument
  • Creating an alias: b = a

What decreases it: del a, reassigning a = something_else, or the variable going out of scope.

The Circular Reference Problem

Reference counting alone can’t handle this:

class Node:
    def __init__(self):
        self.ref = None

a = Node()
b = Node()
a.ref = b   # a points to b
b.ref = a   # b points to a — circular!

del a
del b
# Both refcounts are still 1 (they reference each other)
# Reference counting alone can't free them!
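This is exactly the case the cyclic collector exists for — gc.collect() walks the object graph, finds the unreachable pair, and frees it. It returns the number of unreachable objects it found:

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

a = Node()
b = Node()
a.ref = b
b.ref = a   # circular reference

del a
del b

# Reference counting left the cycle alive; the GC cleans it up
collected = gc.collect()
print(collected)  # at least 2 — the two Nodes (plus their __dict__s)
```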

The Generational Garbage Collector

Python’s GC handles circular references using a generational approach. Objects are grouped into three generations:

  • Generation 0 — newly created objects. Collected most frequently.
  • Generation 1 — survived one collection cycle.
  • Generation 2 — long-lived objects. Collected least frequently.

The idea: most objects die young. By checking new objects more often, we save time.

import gc

print(gc.get_count())       # (num_gen0, num_gen1, num_gen2)
print(gc.get_threshold())   # (700, 10, 10) — default thresholds

gc.collect()  # manually trigger a full collection

The GC runs automatically when the number of allocations minus deallocations in gen 0 exceeds the threshold (default 700).

__del__ Destructor

We can define __del__ to run cleanup code when an object is about to be destroyed. But it’s rarely used and can cause issues with the garbage collector.

class TempFile:
    def __init__(self, name):
        self.name = name
        print(f"Created {name}")

    def __del__(self):
        print(f"Deleting {self.name}")

f = TempFile("data.tmp")
del f  # Deleting data.tmp

Prefer context managers (with statement) over __del__ for cleanup — they’re more predictable.

Weak References

Sometimes we want to reference an object without preventing it from being garbage collected. That’s what weakref is for.

import weakref

class BigData:
    pass

obj = BigData()
weak = weakref.ref(obj)

print(weak())   # <__main__.BigData object ...>
del obj
print(weak())   # None — object was collected

In simple language, Python uses reference counting as its main memory strategy — when nothing points to an object anymore, it’s immediately freed. For tricky cases like circular references, the generational garbage collector steps in and cleans up periodically.


25

Global Interpreter Lock (GIL)

advanced GIL threading CPython concurrency

The GIL is a mutex (a lock) in CPython that allows only one thread to execute Python bytecode at a time. Even on a 16-core machine, only one thread runs Python code at any given moment.

This is one of the most misunderstood parts of Python. Let’s break it down.

Why Does the GIL Exist?

Remember reference counting from the previous note? Every object has a reference count. If two threads modify that count simultaneously without a lock, we get a race condition — the count could go wrong, leading to memory leaks or crashes.

The GIL is the simplest solution: one big lock around the entire interpreter. Only one thread can touch Python objects at a time, so reference counting stays safe.
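We can even inspect the scheduling knob CPython uses for this lock — by default, the running thread is asked to release the GIL every 5 milliseconds so other threads get a turn:

```python
import sys

# How often (in seconds) the running thread is asked to release the GIL
print(sys.getswitchinterval())   # 0.005 — every 5 ms by default

# Tunable via sys.setswitchinterval, though rarely worth changing
sys.setswitchinterval(0.01)
```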

GIL Thread Switching Timeline

  Thread A:  running    | waiting    | running    | ...
  Thread B:  waiting    | running    | waiting    | ...
  GIL:       held by A  | held by B  | held by A  | ...

Threads take turns — only one executes Python bytecode at a time

CPU-Bound vs I/O-Bound

The GIL’s impact depends on what kind of work we’re doing:

CPU-bound (number crunching, image processing) — the GIL hurts. Threads can’t run in parallel. Adding more threads might even make things slower due to GIL contention.

import threading, time

def count():
    total = 0
    for _ in range(50_000_000):
        total += 1

# Single-threaded
start = time.time()
count()
count()
print(f"Sequential: {time.time() - start:.2f}s")

# Multi-threaded — NOT faster because of GIL!
start = time.time()
t1 = threading.Thread(target=count)
t2 = threading.Thread(target=count)
t1.start(); t2.start()
t1.join(); t2.join()
print(f"Threaded: {time.time() - start:.2f}s")  # about the same or slower

I/O-bound (network requests, file reads, database queries) — the GIL is fine. Python releases the GIL while waiting for I/O, so other threads can run.

import threading, time, urllib.request

def fetch(url):
    urllib.request.urlopen(url)

# Threads help here — GIL released during network wait
urls = ["https://python.org"] * 5
threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
for t in threads: t.start()
for t in threads: t.join()

Workarounds for CPU-Bound Work

1. multiprocessing — Separate Processes

Each process gets its own Python interpreter and GIL. True parallelism.

from multiprocessing import Pool

def heavy_work(n):
    return sum(range(n))

with Pool(4) as pool:                    # 4 separate processes
    results = pool.map(heavy_work, [10**7] * 4)

2. C Extensions

Libraries like NumPy release the GIL when doing heavy computation in C. That’s why NumPy operations are fast even with threads.

3. asyncio for I/O

For I/O-bound work, asyncio is often better than threads — lighter weight, no GIL worries.

Python 3.13: Free-Threaded Mode

Python 3.13 introduced an experimental free-threaded mode (PEP 703) that removes the GIL entirely. It’s opt-in and not production-ready yet, but it’s a big step toward true multithreading in Python.

# Build CPython with --disable-gil (experimental)
python3.13t  # the free-threaded build

This is still evolving, but it signals that the GIL’s days may be numbered.

In simple language, the GIL is a lock that prevents multiple threads from running Python code simultaneously. It exists to keep reference counting safe. For I/O-bound work, it’s not a problem. For CPU-bound work, use multiprocessing to sidestep it entirely.


Error Handling & Context

26

Exception Handling

beginner exceptions try-except error-handling finally

When something goes wrong in Python, it raises an exception. Instead of letting our program crash, we can catch that exception and handle it gracefully. That’s what try/except is for.

The Basics: try/except

We wrap the risky code in a try block and handle the error in except.

try:
    result = 10 / 0
except ZeroDivisionError:
    print("Can't divide by zero!")  # this runs instead of crashing

Catching Specific Exceptions

We should always catch specific exceptions. This way we know exactly what went wrong.

try:
    num = int("hello")
except ValueError:
    print("That's not a valid number!")

We can have multiple except blocks for different error types:

try:
    data = {"name": "Manish"}
    print(data["age"])
except KeyError:
    print("Key doesn't exist")
except TypeError:
    print("Wrong type used")

The else Clause

The else block runs only when no exception occurred. Think of it as the “happy path” code.

try:
    num = int("42")
except ValueError:
    print("Invalid number")
else:
    print(f"Parsed successfully: {num}")  # runs because no error happened

The finally Clause

finally always runs — whether an exception happened or not. It’s perfect for cleanup tasks like closing files or database connections.

try:
    f = open("data.txt")
    content = f.read()
except FileNotFoundError:
    print("File not found!")
finally:
    print("This always runs — cleanup happens here")

Raising Exceptions

We can raise our own exceptions using the raise keyword. This is useful when we want to signal that something is wrong from our own code.

def set_age(age):
    if age < 0:
        raise ValueError("Age can't be negative")
    return age

try:
    set_age(-5)
except ValueError as e:
    print(e)  # "Age can't be negative"

Common Built-in Exceptions

Here are the ones we’ll see most often:

  • ValueError — right type, wrong value (e.g., int("hello"))
  • TypeError — wrong type (e.g., "2" + 2)
  • KeyError — key not found in a dict
  • IndexError — list index out of range
  • FileNotFoundError — file doesn’t exist
  • AttributeError — object doesn’t have that attribute
  • ZeroDivisionError — dividing by zero

The Bare except Anti-pattern

Using except without specifying an exception type catches everything — including things like KeyboardInterrupt and SystemExit. This is almost always a bad idea because it hides bugs.

# Bad — don't do this
try:
    something()
except:
    pass  # silently swallows ALL errors, even Ctrl+C

# Good — catch what we expect
try:
    something()
except (ValueError, TypeError) as e:
    print(f"Handled: {e}")

In simple language, exception handling is our safety net. We wrap risky code in try, catch known problems with except, run cleanup with finally, and celebrate the happy path with else.


27

Custom Exceptions

intermediate exceptions custom-exceptions error-classes

Python’s built-in exceptions like ValueError and TypeError are great, but sometimes we need errors that are specific to our application. That’s where custom exceptions come in.

Why Create Custom Exceptions?

Imagine we’re building a payment system. When a payment fails, raising a generic ValueError doesn’t tell us much. But a PaymentFailedError with the transaction ID? Now we’re talking.

Custom exceptions let us:

  • Give meaningful names to errors in our domain
  • Attach extra data (like error codes or context)
  • Let callers catch our specific errors without catching unrelated ones

The Basics

A custom exception is just a class that inherits from Exception. That’s it.

class PaymentFailedError(Exception):
    pass

# Using it
raise PaymentFailedError("Insufficient funds")

Important: Always inherit from Exception, not BaseException. The BaseException class includes things like KeyboardInterrupt and SystemExit — we don’t want to accidentally catch those.

Adding Custom Attributes

The real power comes when we attach extra information to our exceptions.

class PaymentFailedError(Exception):
    def __init__(self, message, transaction_id=None, error_code=None):
        super().__init__(message)
        self.transaction_id = transaction_id
        self.error_code = error_code

# Now we can catch it and access the details
try:
    raise PaymentFailedError(
        "Card declined",
        transaction_id="txn_abc123",
        error_code=402
    )
except PaymentFailedError as e:
    print(f"Payment error: {e}")                    # Card declined
    print(f"Transaction: {e.transaction_id}")       # txn_abc123
    print(f"Code: {e.error_code}")                  # 402

Building Exception Hierarchies

For larger apps, we create a base exception for our project and build specific ones on top. This way, callers can catch all our errors with the base class, or specific ones when needed.

class AppError(Exception):
    """Base exception for our application."""
    pass

class AuthError(AppError):
    """Authentication-related errors."""
    pass

class NotFoundError(AppError):
    """Resource not found."""
    pass

class PermissionDeniedError(AuthError):
    """User doesn't have permission."""
    pass

Now we can catch broadly or narrowly:

try:
    authenticate(user)
except PermissionDeniedError:
    print("No permission")       # catches only permission issues
except AuthError:
    print("Auth problem")        # catches all auth issues
except AppError:
    print("Something went wrong") # catches all our app errors

Customizing __str__

We can control how our exception looks when printed by overriding __str__.

class ValidationError(Exception):
    def __init__(self, field, message):
        self.field = field
        self.message = message

    def __str__(self):
        return f"[{self.field}] {self.message}"

raise ValidationError("email", "Invalid format")
# Output: [email] Invalid format

Best Practices

  • Name ends with ErrorPaymentFailedError, not PaymentFailed or PaymentException
  • Inherit from Exception — never from BaseException
  • Keep a base class for our app — makes catching everything easy
  • Don’t go overboard — one custom exception per meaningful error scenario, not per function
  • Always call super().__init__() — so the default message behavior works
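A sketch of why that last point matters — calling super().__init__(message) is what stores the message in args and powers str(e). (ConfigError here is a hypothetical example, not from the notes above.)

```python
class ConfigError(Exception):
    def __init__(self, message, path):
        super().__init__(message)   # stores message in .args, powers str(e)
        self.path = path

err = ConfigError("missing key 'debug'", "/etc/app.toml")
print(str(err))   # missing key 'debug'
print(err.args)   # ("missing key 'debug'",)
print(err.path)   # /etc/app.toml
```

Skip the super().__init__() call and str(err) comes back empty — the exception still works, but logging and tracebacks lose the message.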

In simple language, custom exceptions are our way of speaking the language of our application. Instead of generic “something went wrong” errors, we get descriptive, catchable, data-rich error types that make debugging a breeze.


28

Context Managers

intermediate context-manager with enter-exit contextlib

A context manager is an object that sets something up and guarantees it gets cleaned up — no matter what happens in between. Think of it like a hotel check-in: we arrive (__enter__), stay and do our thing, and the hotel ensures checkout happens (__exit__) even if there’s a fire alarm.

The Problem Context Managers Solve

Without context managers, we have to remember to clean up resources manually. And if an exception happens before cleanup, we’re in trouble.

# Without context manager — risky
f = open("data.txt")
content = f.read()
f.close()  # what if an error happens before this line?

With a context manager, cleanup is guaranteed:

# With context manager — safe
with open("data.txt") as f:
    content = f.read()
# f.close() happens automatically, even if an error occurs

How It Works: __enter__ and __exit__

The with statement calls two special methods on the object:

  1. __enter__() — runs at the start, returns something we can use (the as variable)
  2. __exit__() — runs at the end, handles cleanup and any exceptions
with MyManager() as obj:

  Step 1 — __enter__() is called → its return value becomes obj
  Step 2 — our code block runs (the indented body)
           an exception here still goes to Step 3
  Step 3 — __exit__(exc_type, exc_val, exc_tb) is called → cleanup
           if __exit__ returns True, the exception is suppressed; otherwise it's re-raised

Writing a Class-Based Context Manager

We just need a class with __enter__ and __exit__ methods.

import time

class Timer:
    def __enter__(self):
        self.start = time.time()
        return self  # this becomes the 'as' variable

    def __exit__(self, exc_type, exc_val, exc_tb):
        elapsed = time.time() - self.start
        print(f"Took {elapsed:.2f} seconds")
        return False  # don't suppress exceptions

with Timer():
    total = sum(range(1_000_000))
# Prints: Took 0.03 seconds

The __exit__ method receives three arguments about any exception that occurred. If no exception, all three are None.
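And returning True from __exit__ swallows the exception. A minimal sketch (SwallowValueError is a made-up name for illustration) that suppresses only ValueError and lets everything else propagate:

```python
class SwallowValueError:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # True → suppress; only for ValueError, anything else re-raises
        return exc_type is ValueError

with SwallowValueError():
    raise ValueError("ignored")

print("still running")  # the exception never escaped the with block
```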

The @contextmanager Decorator

Writing a whole class just for setup/teardown can feel heavy. The contextlib module gives us a decorator that turns a generator function into a context manager.

import time
from contextlib import contextmanager

@contextmanager
def timer():
    start = time.time()
    yield  # everything before yield = __enter__, after = __exit__
    elapsed = time.time() - start
    print(f"Took {elapsed:.2f} seconds")

with timer():
    total = sum(range(1_000_000))

Everything before yield is the setup (__enter__). Everything after yield is the cleanup (__exit__). If we need to return a value, we yield it.

@contextmanager
def open_db():
    conn = create_connection()
    try:
        yield conn        # caller gets the connection
    finally:
        conn.close()      # cleanup always happens

Common Uses

Context managers pop up everywhere in Python:

  • File handlingwith open(...) as f
  • Database connectionswith db.connect() as conn
  • Lockswith threading.Lock()
  • Temporary directorieswith tempfile.TemporaryDirectory() as d
  • Suppressing exceptionswith contextlib.suppress(FileNotFoundError)
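That last one is handy enough to show — suppress is a one-liner replacement for try/except/pass:

```python
import os
from contextlib import suppress

# Instead of: try: os.remove(...) / except FileNotFoundError: pass
with suppress(FileNotFoundError):
    os.remove("might_not_exist.tmp")

print("no crash either way")
```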

Nested with Statements

We can nest them or use a single with for multiple managers:

# Equivalent to two nested with blocks
with open("input.txt") as src, open("output.txt", "w") as dst:
    dst.write(src.read())

In simple language, context managers are Python’s way of saying “I’ll handle the cleanup, no matter what.” We just focus on the work inside the with block, and Python takes care of the rest.


Concurrency & Async

29

Threading vs Multiprocessing

intermediate threading multiprocessing concurrency parallelism

Python gives us two ways to run things at the same time: threads (concurrency) and processes (parallelism). They sound similar, but they work very differently under the hood.

Concurrency vs Parallelism

  • Concurrency — multiple tasks making progress by switching between them (like juggling)
  • Parallelism — multiple tasks literally running at the same time on different CPU cores

Threads give us concurrency. Processes give us true parallelism.

Threading (Shared Memory)
  One process → Thread 1 | Thread 2 | Thread 3
  All threads share the same variables — and one GIL
  Good for: I/O-bound tasks (network, files)

Multiprocessing (Separate Memory)
  Process 1 | Process 2 | Process 3 — each with its own memory and its own GIL
  Good for: CPU-bound tasks (math, processing)

The GIL Problem

Python has a Global Interpreter Lock (GIL) — a mutex that lets only one thread execute Python bytecode at a time. This means threads can’t truly run Python code in parallel.

So why use threads at all? Because when a thread is waiting for I/O (network response, file read, database query), it releases the GIL. Other threads can run during that wait time. That’s why threads are great for I/O-bound work.

For CPU-heavy work (number crunching, image processing), threads don’t help because the GIL blocks parallel execution. That’s when we reach for multiprocessing — each process has its own GIL.

Threading Basics

import threading
import time

def download(url):
    print(f"Downloading {url}...")
    time.sleep(2)  # simulating network I/O
    print(f"Done: {url}")

# Create and start threads
t1 = threading.Thread(target=download, args=("page1.html",))
t2 = threading.Thread(target=download, args=("page2.html",))
t1.start()
t2.start()

# Wait for both to finish
t1.join()
t2.join()
print("All downloads complete")  # takes ~2s, not ~4s
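Shared memory is the catch with threads: when several of them write to the same variable, we need a Lock, or updates can get lost mid-increment. A minimal sketch:

```python
import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    for _ in range(100_000):
        with lock:          # only one thread updates counter at a time
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 — correct thanks to the lock
```

Without the lock, the final count would likely come up short — `counter += 1` is a read-modify-write, and threads can interleave between the read and the write.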

Multiprocessing Basics

import multiprocessing

def crunch_numbers(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=crunch_numbers, args=(10_000_000,))
    p2 = multiprocessing.Process(target=crunch_numbers, args=(10_000_000,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()

The if __name__ == "__main__" guard is required for multiprocessing on some platforms (especially Windows and macOS) to prevent infinite process spawning.

Sharing Data Between Processes

Since processes have separate memory, we use Queue or Pipe to communicate.

from multiprocessing import Process, Queue

def worker(q, data):
    result = sum(data)
    q.put(result)  # send result back

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q, [1, 2, 3, 4, 5]))
    p.start()
    result = q.get()  # blocks until result is available
    p.join()
    print(result)  # 15

When to Use Which

Scenario           | Use             | Why
Downloading files  | Threading       | I/O-bound, threads release GIL during waits
API calls          | Threading       | Network I/O, same reason
Image processing   | Multiprocessing | CPU-bound, needs true parallelism
Data crunching     | Multiprocessing | CPU-bound, bypasses the GIL
Simple scripting   | Neither         | Keep it simple until we need speed

In simple language, threads are like one chef switching between tasks in one kitchen. Processes are like multiple chefs, each with their own kitchen. Threads share everything (fast but tricky), processes are isolated (safe but heavier).


30

Asyncio and async/await

advanced asyncio async await coroutines event-loop

Asyncio is Python’s built-in framework for writing asynchronous code — code that can pause while waiting for something (like a network response) and let other code run in the meantime. It’s single-threaded, but incredibly efficient for I/O-bound tasks.

Coroutines: The Building Blocks

A coroutine is a function defined with async def. It doesn’t run when we call it — it returns a coroutine object that we need to await.

import asyncio

async def greet(name):
    print(f"Hello, {name}!")
    await asyncio.sleep(1)  # non-blocking pause
    print(f"Goodbye, {name}!")

# This is how we run it
asyncio.run(greet("Manish"))

The await keyword is where the magic happens. When Python hits await, it pauses that coroutine and goes to do other work. When the awaited thing is done, it comes back and continues.

The Event Loop

The event loop is the heart of asyncio. It keeps track of all running coroutines, figures out which ones are ready to continue, and switches between them.

The Event Loop juggling three tasks:

  Task A — fetch(url_1)    Task B — fetch(url_2)    Task C — read_db()

  1. Run Task A until it hits await → pause A
  2. Switch to Task B until it hits await → pause B
  3. Switch to Task C until it hits await → pause C
  4. Task A's I/O is done → resume A, and keep cycling...

All on a single thread — no parallelism, just smart switching

Running Multiple Tasks with gather()

The real power of asyncio is running many things concurrently. asyncio.gather() runs multiple coroutines at the same time and waits for all of them.

import asyncio

async def fetch(url):
    print(f"Fetching {url}...")
    await asyncio.sleep(2)  # simulating network delay
    return f"Data from {url}"

async def main():
    # These run concurrently, not one after another
    results = await asyncio.gather(
        fetch("api.com/users"),
        fetch("api.com/posts"),
        fetch("api.com/comments"),
    )
    print(results)  # all three results, took ~2s total

asyncio.run(main())

Without gather, three sequential fetches would take ~6 seconds. With it, they overlap and take ~2 seconds.

Creating Tasks

asyncio.create_task() schedules a coroutine to run in the background. We can do other things while the task runs.

async def main():
    task = asyncio.create_task(fetch("api.com/data"))

    # do other stuff while task runs in background
    print("Doing other work...")
    await asyncio.sleep(1)

    # now get the result
    result = await task
    print(result)

The difference: await fetch(...) runs it and waits. create_task(fetch(...)) starts it in the background — we await it later when we need the result.
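One more tool worth knowing here — asyncio.wait_for puts a deadline on any awaitable and cancels it when time runs out. A sketch (slow_fetch is a stand-in for a real request):

```python
import asyncio

async def slow_fetch():
    await asyncio.sleep(10)   # pretend this is a slow API
    return "data"

async def main():
    try:
        result = await asyncio.wait_for(slow_fetch(), timeout=0.1)
    except asyncio.TimeoutError:
        print("gave up after 0.1s")   # slow_fetch was cancelled

asyncio.run(main())
```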

Real-World Use: aiohttp

asyncio.sleep() is great for learning, but in practice we use async-compatible libraries. aiohttp is the go-to for HTTP requests.

import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.json()

async def main():
    async with aiohttp.ClientSession() as session:
        data = await asyncio.gather(
            fetch(session, "https://api.example.com/users"),
            fetch(session, "https://api.example.com/posts"),
        )
        print(data)

asyncio.run(main())

asyncio vs threading

Feature          | asyncio                             | threading
Threads          | Single thread                       | Multiple threads
Switching        | Cooperative (at await)              | Preemptive (OS decides)
Race conditions  | Rare (explicit yield points)        | Common (need locks)
Best for         | Many I/O tasks (1000+ connections)  | Fewer I/O tasks, simpler code
Learning curve   | Steeper                             | Gentler

In simple language, asyncio is like a really efficient waiter at a restaurant. Instead of standing at one table waiting for the kitchen, the waiter takes orders from all tables and brings food as it’s ready — all by themselves, no extra waiters needed.


31

Concurrent.futures

advanced concurrent futures thread-pool process-pool

The concurrent.futures module is Python’s high-level, “batteries included” way to run tasks in parallel. Instead of manually creating threads or processes, we use executor pools that manage everything for us.

Think of it like a task queue with a pool of workers. We submit jobs, and the pool assigns them to available workers.

ThreadPoolExecutor

This creates a pool of threads. Perfect for I/O-bound tasks — downloading files, making API calls, reading from databases.

from concurrent.futures import ThreadPoolExecutor
import time

def download(url):
    time.sleep(2)  # simulating network I/O
    return f"Downloaded {url}"

# Pool of 3 threads handling 5 tasks
with ThreadPoolExecutor(max_workers=3) as executor:
    urls = ["page1", "page2", "page3", "page4", "page5"]
    results = executor.map(download, urls)
    for result in results:
        print(result)  # takes ~4s total (2 batches), not ~10s

The with statement ensures the pool shuts down cleanly when we’re done. No need to manually join threads.

ProcessPoolExecutor

Same API, but uses processes instead of threads. Perfect for CPU-bound work — number crunching, image processing, data transformation.

from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as executor:
        numbers = [10_000_000, 20_000_000, 30_000_000]
        results = executor.map(crunch, numbers)
        for result in results:
            print(result)

The only difference is we swap ThreadPoolExecutor for ProcessPoolExecutor. The rest of the code stays the same. That’s the beauty of this module.

submit() and Future Objects

map() is great for bulk operations, but submit() gives us more control. It returns a Future object — a promise that a result will be available later.

from concurrent.futures import ThreadPoolExecutor
import time

def fetch(url):
    time.sleep(1)
    return f"Data from {url}"

with ThreadPoolExecutor(max_workers=3) as executor:
    future = executor.submit(fetch, "api.com/users")

    # We can do other stuff here while it's running
    print("Working on other things...")

    # Now get the result (blocks until ready)
    result = future.result()
    print(result)

A Future has some handy methods:

  • result() — blocks and returns the result (or raises the exception)
  • done() — returns True if the task has finished
  • cancel() — tries to cancel the task (only works if it hasn’t started)
  • exception() — returns the exception if one occurred
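Here's a small sketch showing these methods in action (the slow_square task and its timing are our own invention for illustration):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_square(n):
    time.sleep(0.1)  # simulate slow work
    return n * n

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(slow_square, 6)
    print(future.done())       # almost certainly False — still running
    print(future.result())     # 36 — blocks until the task finishes
    print(future.done())       # True
    print(future.exception())  # None — the task succeeded
```

Note that result() and exception() remain available even after the pool shuts down — the Future caches its outcome.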

as_completed(): Results As They Arrive

By default, map() returns results in the order we submitted them. But what if we want results as soon as they’re ready? That’s what as_completed() does.

from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def fetch(url, delay):
    time.sleep(delay)
    return f"{url} (took {delay}s)"

with ThreadPoolExecutor(max_workers=3) as executor:
    futures = {
        executor.submit(fetch, "fast.com", 1): "fast",
        executor.submit(fetch, "slow.com", 3): "slow",
        executor.submit(fetch, "medium.com", 2): "medium",
    }

    # Results arrive in completion order, not submission order
    for future in as_completed(futures):
        tag = futures[future]
        print(f"{tag}: {future.result()}")
    # Output: fast, medium, slow (fastest first)

Error Handling

When a task raises an exception, it gets stored in the Future. Calling result() re-raises it.

from concurrent.futures import ThreadPoolExecutor

def risky_task(n):
    if n == 0:
        raise ValueError("Can't process zero!")
    return 100 / n

with ThreadPoolExecutor() as executor:
    futures = [executor.submit(risky_task, n) for n in [5, 0, 10]]

    for future in futures:
        try:
            print(future.result())
        except ValueError as e:
            print(f"Error: {e}")

When to Use This Over Raw threading/multiprocessing

  • Use concurrent.futures when we just need to parallelize a batch of similar tasks. It’s cleaner and handles the pool lifecycle for us.
  • Use raw threading when we need fine-grained control over threads (custom synchronization, daemon threads, etc.).
  • Use raw multiprocessing when we need shared memory, custom IPC, or complex process management.

In simple language, concurrent.futures is the “I just want to run a bunch of things faster” module. We don’t need to think about thread management, pool cleanup, or synchronization. We submit tasks, get results.


Advanced Python

32

Metaclasses

advanced metaclasses type meta-programming

In Python, everything is an object — including classes themselves. A metaclass is the “class of a class.” It’s what creates and configures class objects, the same way a class creates and configures instances.

In simple language, if a class is a blueprint for objects, then a metaclass is a blueprint for classes.

type() Is the Default Metaclass

Every class we write is secretly created by type. We can even see this:

class Dog:
    pass

print(type(Dog))       # <class 'type'>
print(type(Dog()))     # <class 'Dog'>

So Dog is an instance of type, and Dog() is an instance of Dog.

Creation chain:

type (metaclass) → MyMeta (custom metaclass) → MyClass (class) → obj (instance)

type(obj) = MyClass | type(MyClass) = MyMeta | type(MyMeta) = type

Creating Classes Dynamically with type()

We can create classes on the fly using type(name, bases, dict):

# These two are equivalent
class Dog:
    sound = "woof"

Dog = type("Dog", (), {"sound": "woof"})

This is what Python does under the hood every time we write a class statement.

Writing a Custom Metaclass

A custom metaclass inherits from type and overrides __new__ or __init__ to customize class creation.

class ValidatedMeta(type):
    def __new__(mcs, name, bases, namespace):
        # Require all classes to have a 'version' attribute
        if "version" not in namespace:
            raise TypeError(f"{name} must define a 'version' attribute")
        return super().__new__(mcs, name, bases, namespace)

class MyPlugin(metaclass=ValidatedMeta):
    version = "1.0"  # this works

class BadPlugin(metaclass=ValidatedMeta):
    pass  # TypeError: BadPlugin must define a 'version' attribute

When Python sees metaclass=ValidatedMeta, it calls ValidatedMeta.__new__() instead of type.__new__() to create the class.

Practical Uses

Metaclasses are used in some well-known libraries:

  • Django ORMModel classes use metaclasses to turn field definitions into database schema
  • Abstract Base ClassesABCMeta enforces that subclasses implement required methods
  • Plugin registration — auto-register every subclass in a registry

Here’s a registration example:

class PluginMeta(type):
    registry = {}
    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        if bases:  # don't register the base class itself
            mcs.registry[name] = cls
        return cls

class Plugin(metaclass=PluginMeta):
    pass

class AuthPlugin(Plugin):
    pass

class CachePlugin(Plugin):
    pass

print(PluginMeta.registry)
# {'AuthPlugin': <class 'AuthPlugin'>, 'CachePlugin': <class 'CachePlugin'>}

The Simpler Alternative: __init_subclass__

Since Python 3.6, we have __init_subclass__() which handles most use cases without needing a metaclass.

class Plugin:
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Plugin.registry[cls.__name__] = cls

class AuthPlugin(Plugin):
    pass

print(Plugin.registry)  # {'AuthPlugin': <class 'AuthPlugin'>}

Much simpler. Same result.

When NOT to Use Metaclasses

Metaclasses are powerful but rarely needed. Before reaching for one, consider:

  • __init_subclass__ — for subclass hooks (Python 3.6+)
  • Class decorators — for modifying a class after creation
  • Descriptors — for custom attribute behavior

The famous quote: “Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don’t.” — Tim Peters

In simple language, metaclasses let us control how classes are built. They’re the factory that produces factories. Incredibly powerful, but for most of us, __init_subclass__ and class decorators will do the job.


33

__slots__

advanced slots memory optimization attributes

By default, every Python object stores its attributes in a dictionary (__dict__). This is flexible — we can add any attribute at any time — but it costs memory. When we have millions of instances of the same class, that memory adds up fast.

__slots__ tells Python: “These are the only attributes this class will ever have.” Python then uses a more compact internal structure instead of a dict.

How It Works

class Point:
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(3, 4)
print(p.x)  # 3

p.z = 5  # AttributeError: 'Point' object has no attribute 'z'

No __dict__ is created. Attributes are stored in fixed-size slots, like a struct in C.

Why Use It?

Two main reasons:

  1. Memory savings — each instance uses significantly less memory (no per-instance dict)
  2. Faster attribute access — direct offset lookup instead of dict hash lookup

The savings matter when we have many instances:

import sys

class WithDict:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class WithSlots:
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x = x
        self.y = y

a = WithDict(1, 2)
b = WithSlots(1, 2)
print(sys.getsizeof(a.__dict__))  # ~104 bytes (the dict itself)
print(sys.getsizeof(a))           # ~48 bytes (the object)
print(sys.getsizeof(b))           # ~48 bytes (no dict overhead at all)

With a million instances, the dict overhead alone can be hundreds of megabytes.

No Dynamic Attributes

The trade-off is clear: we lose the ability to add arbitrary attributes at runtime.

class User:
    __slots__ = ("name", "email")

    def __init__(self, name, email):
        self.name = name
        self.email = email

u = User("Manish", "m@example.com")
u.age = 25  # AttributeError — 'age' not in __slots__

If we need some flexibility, we can include __dict__ in our slots:

class FlexUser:
    __slots__ = ("name", "__dict__")  # fixed + dynamic attributes

    def __init__(self, name):
        self.name = name

u = FlexUser("Manish")
u.age = 25  # works — stored in __dict__

But this partially defeats the purpose.

Inheritance Caveats

Slots get tricky with inheritance:

  • If a parent has __slots__ and a child doesn’t define __slots__, the child gets a __dict__ (losing the benefit)
  • Both parent and child should define __slots__ for full savings
  • Don’t repeat parent slots in the child
class Base:
    __slots__ = ("x",)

class Child(Base):
    __slots__ = ("y",)  # only new attributes here

c = Child()
c.x = 1  # from Base's slots
c.y = 2  # from Child's slots

With Dataclasses (Python 3.10+)

Dataclasses support slots natively with the slots=True parameter. This is the cleanest way to use them.

from dataclasses import dataclass

@dataclass(slots=True)
class Point:
    x: float
    y: float

p = Point(3.0, 4.0)
print(p.x)  # 3.0
p.z = 5     # AttributeError

No need to define __slots__ manually — the dataclass decorator handles it.

When to Use __slots__

  • Many instances of the same class (data processing, game entities, ORM rows)
  • Known, fixed attributes that won’t change
  • Performance-critical code where memory or speed matters

When NOT to use:

  • Prototyping or small scripts (not worth the hassle)
  • Classes that need dynamic attributes
  • When we’re not creating many instances

In simple language, __slots__ is us telling Python “I know exactly what attributes this class needs — please be efficient about it.” We give up flexibility for speed and memory savings.


34

Type Hints and Annotations

intermediate type-hints typing annotations mypy

Python is dynamically typed — we don’t have to declare types. But starting with Python 3.5, we can optionally add type hints. They don’t change how the code runs. They’re notes for humans, IDEs, and type checkers like mypy.

Basic Type Hints

We add types to function parameters and return values with : and ->.

def greet(name: str) -> str:
    return f"Hello, {name}!"

def add(a: int, b: int) -> int:
    return a + b

age: int = 25
name: str = "Manish"
is_active: bool = True

If we pass wrong types, Python won’t stop us. Type hints are not enforced at runtime. But our IDE will highlight the mistake, and mypy will catch it.
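We can see this non-enforcement for ourselves — a "wrong" argument sails straight through at runtime:

```python
def double(x: int) -> int:
    return x * 2

print(double(5))     # 10
print(double("ha"))  # 'haha' — no error; the hint is ignored at runtime
```

Python happily applies * to the string. Only a static checker like mypy would flag the second call.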

Common Types

For simple types, we just use the built-in names:

  • int, float, str, bool — basic types
  • None — for functions that return nothing
  • bytes — for binary data

def process(data: str) -> None:
    print(data)  # returns None implicitly

Collections

For containers, Python 3.9+ lets us use built-in types directly. Before 3.9, we import from typing.

# Python 3.9+ (preferred)
def get_names() -> list[str]:
    return ["Alice", "Bob"]

scores: dict[str, int] = {"math": 95, "science": 88}
point: tuple[float, float] = (3.14, 2.71)

# Python 3.5-3.8 (use typing imports)
from typing import List, Dict, Tuple
def get_names() -> List[str]:
    return ["Alice", "Bob"]

Optional and Union

When a value could be one of several types, we use Union. When it could be a type or None, we use Optional.

from typing import Optional, Union

def find_user(user_id: int) -> Optional[str]:
    # Returns str or None
    if user_id == 1:
        return "Manish"
    return None

def parse(value: Union[str, int]) -> str:
    return str(value)

# Python 3.10+ — cleaner syntax with |
def find_user(user_id: int) -> str | None:
    ...

def parse(value: str | int) -> str:
    return str(value)

Optional[str] is just shorthand for Union[str, None].
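We can verify that equivalence directly (and note that argument order doesn't matter in a Union):

```python
from typing import Optional, Union

print(Optional[str] == Union[str, None])  # True — same type
print(Optional[str] == Union[None, str])  # True — Union ignores order
```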

TypeVar: Generic Functions

When we want a function that works with any type but preserves the type relationship:

from typing import TypeVar

T = TypeVar("T")

def first(items: list[T]) -> T:
    return items[0]

# Type checker knows: first([1, 2, 3]) returns int
# Type checker knows: first(["a", "b"]) returns str

Protocol: Structural Typing

Protocols let us define “interfaces” without inheritance. If an object has the right methods, it matches.

from typing import Protocol

class Drawable(Protocol):
    def draw(self) -> None: ...

class Circle:
    def draw(self) -> None:
        print("Drawing circle")

def render(shape: Drawable) -> None:
    shape.draw()

render(Circle())  # works — Circle has a draw() method

Circle doesn’t inherit from Drawable. It just happens to have the right method. This is structural typing — “if it fits, it works.”

Using mypy

mypy is the most popular static type checker. It reads our type hints and reports errors without running the code.

# example.py
def double(x: int) -> int:
    return x * 2

result: str = double(5)  # mypy will flag this

$ mypy example.py
error: Incompatible types in assignment (expression has type "int", variable has type "str")

Common Patterns

from typing import Callable, Any

# Function that takes a callback
def retry(func: Callable[..., Any], times: int = 3) -> Any:
    for _ in range(times):
        try:
            return func()
        except Exception:
            continue

# Type alias
UserID = int
Scores = dict[str, list[int]]

def get_scores(user: UserID) -> Scores:
    return {"math": [90, 85, 92]}

In simple language, type hints are like lane markings on a road. The car can still drive anywhere, but the markings help everyone stay safe. We write them for our future selves, our teammates, and our tools.


35

Walrus Operator and Modern Features

intermediate walrus-operator pattern-matching modern-python 3.8+

Python keeps evolving. Let’s look at the most useful features added in recent versions (3.8 through 3.12) that we’ll actually use in day-to-day code.

Walrus Operator := (Python 3.8)

The walrus operator lets us assign and use a value in the same expression. It’s called “walrus” because := looks like a walrus turned sideways.

Without it, we often compute something, check it, then use it — requiring extra lines:

# Before — compute, then check
line = input("Enter: ")
while line != "quit":
    print(f"You said: {line}")
    line = input("Enter: ")

# With walrus — assign and check in one shot
while (line := input("Enter: ")) != "quit":
    print(f"You said: {line}")

It’s especially handy in list comprehensions and conditions:

# Filter and transform in one pass
results = [clean for name in names if (clean := name.strip()) != ""]

# Avoid computing a regex match twice
import re
if (match := re.search(r"\d+", text)):
    print(f"Found number: {match.group()}")

The rule is: don’t overuse it. If it makes the line harder to read, stick with two lines.

Positional-Only Parameters / (Python 3.8)

We can now force certain parameters to be positional-only using /:

def greet(name, /, greeting="Hello"):
    print(f"{greeting}, {name}!")

greet("Manish")                  # works
greet("Manish", greeting="Hi")   # works
greet(name="Manish")             # TypeError — name is positional-only

Everything before / must be passed by position. This is useful for library authors who want to keep parameter names as implementation details.

Structural Pattern Matching match/case (Python 3.10)

This is Python’s version of switch/case, but way more powerful. It matches patterns, not just values.

def handle_command(command):
    match command.split():
        case ["quit"]:
            print("Goodbye!")
        case ["go", direction]:
            print(f"Going {direction}")
        case ["pick", "up", item]:
            print(f"Picked up {item}")
        case _:
            print("Unknown command")

handle_command("go north")     # Going north
handle_command("pick up sword") # Picked up sword

We can match types, destructure objects, and use guards:

def describe(value):
    match value:
        case int(n) if n > 0:
            print(f"Positive integer: {n}")
        case str(s) if len(s) > 5:
            print(f"Long string: {s}")
        case [first, *rest]:
            print(f"List starting with {first}")
        case {"name": name, "age": age}:
            print(f"{name} is {age}")
        case _:
            print("Something else")

The _ is the wildcard — it matches anything, like default in other languages.

Union Type X | Y (Python 3.10)

Instead of Union[str, int] from the typing module, we can now use the pipe operator:

# Before (3.5-3.9)
from typing import Union, Optional
def parse(value: Union[str, int]) -> str: ...
def find(id: int) -> Optional[str]: ...

# After (3.10+)
def parse(value: str | int) -> str: ...
def find(id: int) -> str | None: ...

Much cleaner. Works in isinstance() too:

isinstance(42, int | str)  # True

Exception Groups and except* (Python 3.11)

When multiple things fail at once (like in asyncio.gather()), we can now handle groups of exceptions:

try:
    raise ExceptionGroup("errors", [
        ValueError("bad value"),
        TypeError("wrong type"),
    ])
except* ValueError as eg:
    print(f"Value errors: {eg.exceptions}")
except* TypeError as eg:
    print(f"Type errors: {eg.exceptions}")

The except* syntax matches specific exception types within the group. Multiple except* blocks can each handle different parts of the same group.

F-string Improvements (Python 3.12)

F-strings got more flexible — we can now nest quotes and use expressions that were previously forbidden:

# Python 3.12 — nested quotes and complex expressions
names = ["Alice", "Bob"]
print(f"Users: {", ".join(names)}")  # was a SyntaxError before 3.12

# Multiline expressions inside f-strings
print(f"Result: {
    sum(x**2 for x in range(10))
}")

type Statement (Python 3.12)

A cleaner way to define type aliases:

# Before
from typing import TypeAlias
UserID: TypeAlias = int

# Python 3.12
type UserID = int
type Matrix = list[list[float]]
type Handler = Callable[[Request], Response]

In simple language, Python’s modern features make our code shorter and more expressive. The walrus operator saves lines, pattern matching replaces clunky if/elif chains, and the pipe syntax makes type hints readable. We don’t need all of them, but knowing they exist helps us write cleaner code.


36

Duck Typing and Protocols

intermediate duck-typing protocols structural-typing EAFP

“If it walks like a duck and quacks like a duck, then it must be a duck.” This is the core philosophy behind Python’s type system. We don’t care what an object is — we care what it can do.

What Is Duck Typing?

In languages like Java, we need to explicitly declare that a class implements an interface. In Python, we just use the object. If it has the method we need, it works.

class Duck:
    def quack(self):
        print("Quack!")

class Person:
    def quack(self):
        print("I'm quacking like a duck!")

def make_it_quack(thing):
    thing.quack()  # we don't check the type — just call the method

make_it_quack(Duck())    # Quack!
make_it_quack(Person())  # I'm quacking like a duck!

make_it_quack doesn’t ask “are you a Duck?” — it just asks “can you quack?” That’s duck typing.

EAFP vs LBYL

Python’s duck typing culture leads to a coding style called EAFP — “Easier to Ask Forgiveness than Permission.” Instead of checking if something is possible before doing it, we just try and handle the failure.

# LBYL (Look Before You Leap) — non-Pythonic
if hasattr(obj, "quack"):
    obj.quack()

# EAFP (Easier to Ask Forgiveness) — Pythonic
try:
    obj.quack()
except AttributeError:
    print("This object can't quack")

EAFP is preferred because it’s faster in the common case (no extra check) and avoids race conditions.

Python’s Built-in Protocols

Python already uses duck typing everywhere through implicit “protocols.” If our object has the right methods, it works with built-in features:

  • Iterable — has __iter__() → works with for loops
  • Callable — has __call__() → can be called like a function
  • Context Manager — has __enter__() and __exit__() → works with with
  • Subscriptable — has __getitem__() → supports obj[key]
  • Comparable — has __eq__(), __lt__(), etc. → works with ==, <
  • Hashable — has __hash__() → can be used as dict key or in sets

class Countdown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

# Works with for loops — because it has __iter__ and __next__
for num in Countdown(3):
    print(num)  # 3, 2, 1

We didn’t inherit from any base class. We just implemented the right methods, and Python’s for loop works with it.

typing.Protocol: Explicit Structural Typing

Since Python 3.8, the typing module gives us Protocol — a way to formally define what methods an object should have, without requiring inheritance.

from typing import Protocol

class Renderable(Protocol):
    def render(self) -> str: ...

class HTMLWidget:
    def render(self) -> str:
        return "<div>Widget</div>"

class JSONData:
    def render(self) -> str:
        return '{"key": "value"}'

def display(item: Renderable) -> None:
    print(item.render())

display(HTMLWidget())  # works — has render()
display(JSONData())    # works — has render()

HTMLWidget and JSONData never mention Renderable. They just happen to have a render() method. The type checker sees the match and approves.

runtime_checkable

By default, Protocol only works for static type checking (mypy). If we want isinstance() checks at runtime, we add @runtime_checkable:

from typing import Protocol, runtime_checkable

@runtime_checkable
class Closeable(Protocol):
    def close(self) -> None: ...

import io
f = io.StringIO()
print(isinstance(f, Closeable))  # True — StringIO has close()
print(isinstance(42, Closeable)) # False — int doesn't have close()

Note: runtime_checkable only checks if the methods exist, not their signatures. It’s a quick duck-type check, not a full type validation.

Protocol vs ABC (Abstract Base Classes)

Both define interfaces, but they work differently:

  • ABC — requires explicit inheritance (class MyClass(MyABC)). It’s nominal typing — “I declare that I implement this.”
  • Protocol — no inheritance needed. It’s structural typing — “I have the right methods, so I match.”

from abc import ABC, abstractmethod

# ABC approach — must inherit
class Drawable(ABC):
    @abstractmethod
    def draw(self): ...

class Circle(Drawable):  # must explicitly inherit
    def draw(self):
        print("Drawing circle")

# Protocol approach — no inheritance
from typing import Protocol

class Drawable(Protocol):
    def draw(self) -> None: ...

class Square:  # no inheritance needed
    def draw(self):
        print("Drawing square")

Use ABCs when we own the class hierarchy and want to enforce a contract. Use Protocols when we want to accept any object that has the right shape — especially useful for third-party code we can’t modify.

In simple language, duck typing is Python saying “show me what you can do, not who you are.” Protocols take this idea and give it structure — we can describe the shape we need without forcing anyone to inherit from our classes.


Modules & Patterns

37

Modules, Packages, and Imports

beginner modules packages imports init

A module is simply a .py file. When we write import math, Python finds the file math.py (or a built-in equivalent) and makes its contents available to us. That’s it — every Python file we create is already a module.

Importing Modules

There are a few ways to bring code from one file into another.

import math                    # import the whole module
print(math.sqrt(16))           # 4.0 — access with module.name

from math import sqrt, pi     # import specific names
print(sqrt(16))                # 4.0 — no prefix needed

from math import sqrt as s    # alias to avoid name clashes
print(s(16))                   # 4.0

Avoid from math import * — it dumps everything into our namespace and we lose track of where names come from.

Packages

A package is a folder of modules. It groups related files together. Python recognizes a folder as a package if it contains an __init__.py file (can be empty).

myapp/
    __init__.py        # makes myapp a package
    utils.py
    models/
        __init__.py    # makes models a sub-package
        user.py

from myapp.utils import helper_func
from myapp.models.user import User

__init__.py

This file runs when the package is imported. We can use it to expose a clean public API.

# myapp/__init__.py
from .utils import helper_func   # re-export for convenience
__all__ = ["helper_func"]        # controls what `from myapp import *` exports

Absolute vs Relative Imports

  • Absolute imports spell out the full path from the project root: from myapp.models.user import User
  • Relative imports use dots to refer to the current package: from .utils import helper_func (one dot = current package, two dots = parent)

Absolute imports are clearer and preferred in most cases. Relative imports are handy inside a package to avoid repeating long paths.

Circular Imports

This happens when module A imports module B, and module B imports module A. Python gets stuck in a loop.

# a.py
from b import greet    # tries to load b.py, which tries to load a.py...

# b.py
from a import name     # circular!

Fixes: move the shared code into a third module, use lazy imports (import inside a function instead of at the top), or restructure so the dependency goes one way.
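Here's the lazy-import fix as a runnable sketch. The module names (mod_a, mod_b) are made up, and we write them to a temp directory just so the example is self-contained:

```python
import os
import sys
import tempfile
import textwrap

pkg = tempfile.mkdtemp()

# mod_a defers its import of mod_b until greet() is actually called
with open(os.path.join(pkg, "mod_a.py"), "w", encoding="utf-8") as f:
    f.write(textwrap.dedent("""
        name = "module a"

        def greet():
            from mod_b import reply   # lazy import — runs only when called
            return reply()
    """))

# mod_b can now import mod_a at the top, because mod_a no longer imports it at the top
with open(os.path.join(pkg, "mod_b.py"), "w", encoding="utf-8") as f:
    f.write(textwrap.dedent("""
        from mod_a import name

        def reply():
            return f"hello from b, I can see {name}"
    """))

sys.path.insert(0, pkg)
import mod_a
print(mod_a.greet())  # hello from b, I can see module a
```

By the time greet() runs, mod_a is fully loaded and sits in sys.modules, so mod_b's top-level import succeeds.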

if __name__ == "__main__"

Every module has a __name__ attribute. When we run a file directly, __name__ is set to "__main__". When it’s imported, __name__ is the module’s actual name.

# utils.py
def add(a, b):
    return a + b

if __name__ == "__main__":
    # this block only runs when we execute: python utils.py
    print(add(2, 3))  # 5 — great for quick testing

How Python Finds Modules

When we write import something, Python searches in this order:

  1. The module cache (sys.modules) — already-imported modules are reused, not reloaded
  2. Built-in modules compiled into the interpreter (like sys)
  3. Directories listed in sys.path — the script’s own directory first, then installed packages

import sys
print(sys.path)  # list of directories Python searches

In simple language, modules are just files, packages are just folders, and imports are how we connect them. Python’s import system is straightforward once we see that everything is just files on disk.


38

Pythonic Code and PEP 8

beginner PEP8 pythonic style idioms

“Pythonic” means writing code the way the Python community expects it. It’s not just about working code — it’s about code that reads naturally and uses the language’s strengths.

The Zen of Python

Run import this in any Python shell and we get 19 guiding principles. The ones that matter most day-to-day:

  • Beautiful is better than ugly — readability counts
  • Explicit is better than implicit — don’t be clever, be clear
  • Simple is better than complex — if there’s a straightforward way, use it
  • There should be one obvious way to do it — Python prefers one right path

Key PEP 8 Rules

PEP 8 is the official style guide. Here’s the cheat sheet:

  • Indentation: 4 spaces (never tabs)
  • Line length: 79 characters max (120 is common in practice)
  • Naming: snake_case for functions/variables, PascalCase for classes, UPPER_SNAKE for constants
  • Blank lines: 2 before top-level definitions, 1 between methods
  • Imports: one per line, grouped (stdlib → third-party → local), at the top of the file

Pythonic Patterns

These are the idioms that separate Python beginners from experienced developers.

Use enumerate instead of range(len(...)):

# Not Pythonic
for i in range(len(names)):
    print(i, names[i])

# Pythonic
for i, name in enumerate(names):
    print(i, name)

Unpack instead of indexing:

# Not Pythonic
point = (3, 7)
x = point[0]
y = point[1]

# Pythonic
x, y = (3, 7)

EAFP over LBYL: Python prefers “Easier to Ask Forgiveness than Permission.” In simple language, try it first and handle the error, rather than checking every condition upfront.

# LBYL (Look Before You Leap) — not Pythonic
if key in dictionary:
    value = dictionary[key]

# EAFP — Pythonic
try:
    value = dictionary[key]
except KeyError:
    value = default

Use join for string concatenation:

# Slow — creates a new string each iteration
result = ""
for word in words:
    result += word + " "

# Fast and Pythonic
result = " ".join(words)

Truthiness checks — keep them simple:

# Not Pythonic
if len(my_list) > 0:
if active == True:

# Pythonic — empty collections are falsy, non-empty are truthy
if my_list:
if active:

List comprehensions over manual loops:

# Verbose
squares = []
for x in range(10):
    squares.append(x ** 2)

# Pythonic
squares = [x ** 2 for x in range(10)]

Common Anti-Patterns

  • Using type(x) == int instead of isinstance(x, int)
  • Bare except: that catches everything (including KeyboardInterrupt)
  • Mutable default arguments like def f(lst=[]) (we’ll cover this gotcha later)
  • Using global when we could pass arguments or use a class
  • Writing Java-style getters/setters instead of using @property
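On that last point, @property gives us a computed attribute without getX()/setX() boilerplate. A minimal sketch (the Temperature class is our own example):

```python
class Temperature:
    def __init__(self, celsius):
        self._celsius = celsius  # leading underscore: private by convention

    @property
    def fahrenheit(self):
        # accessed like an attribute, computed like a method
        return self._celsius * 9 / 5 + 32

t = Temperature(100)
print(t.fahrenheit)  # 212.0 — no parentheses needed
```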

In simple language, Pythonic code is about using Python the way Python wants to be used. Read PEP 8 once, use a linter like ruff or flake8, and it becomes second nature.


39

File Handling and I/O

beginner files IO csv json read-write

Working with files is one of the most common things we do in Python. Read some data, process it, write the result. Let’s see how it all works.

Opening Files

The open() function is our gateway to files. It returns a file object that we can read from or write to.

f = open("notes.txt", "r")   # open for reading (default mode)
content = f.read()
f.close()                     # always close when done!

But we should never do it that way. If an error happens before close(), the file stays open and we leak resources.

The with Statement

This is the right way. The with block automatically closes the file when we’re done — even if an error occurs.

with open("notes.txt", "r") as f:
    content = f.read()
# file is automatically closed here

File Modes

  • "r" — read (default, file must exist)
  • "w" — write (creates file or overwrites existing content)
  • "a" — append (adds to the end)
  • "rb" / "wb" — read/write in binary mode (for images, PDFs, etc.)
  • "r+" — read and write
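A quick sketch of the difference between "w" and "a" — using a temporary file so nothing real gets overwritten:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w", encoding="utf-8") as f:
    f.write("first\n")
with open(path, "a", encoding="utf-8") as f:   # append keeps existing content
    f.write("second\n")
with open(path, "w", encoding="utf-8") as f:   # "w" wipes the file first
    f.write("fresh start\n")

with open(path, "r", encoding="utf-8") as f:
    print(f.read())  # fresh start
```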

Reading Files

We have several options depending on what we need.

with open("data.txt", "r", encoding="utf-8") as f:
    whole_thing = f.read()          # entire file as one string
    # or
    one_line = f.readline()         # next line
    # or
    all_lines = f.readlines()       # list of all lines

    # best for large files — reads one line at a time
    for line in f:
        print(line.strip())         # strip() removes trailing newline

Always pass encoding="utf-8" explicitly. The default encoding varies by OS, and that leads to nasty bugs.

Writing Files

with open("output.txt", "w", encoding="utf-8") as f:
    f.write("Hello, world!\n")              # write a string
    f.writelines(["line 1\n", "line 2\n"])  # write a list of strings

Remember: "w" mode wipes the file clean first. Use "a" to add to the end without erasing.

Modern File Paths with pathlib

The pathlib module gives us an object-oriented way to work with file paths. It’s cleaner than string concatenation and works across operating systems.

from pathlib import Path

p = Path("data") / "reports" / "sales.csv"  # builds path with /
print(p.exists())       # True/False
print(p.suffix)         # .csv
print(p.stem)           # sales
content = p.read_text(encoding="utf-8")     # read in one shot
p.write_text("new content", encoding="utf-8")  # write in one shot
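
pathlib also handles directory work. A sketch using a temporary directory so nothing real gets touched:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    base = Path(tmp) / "reports"
    base.mkdir(parents=True, exist_ok=True)    # create nested dirs, no error if present
    (base / "q1.csv").write_text("a,b\n", encoding="utf-8")
    (base / "q2.csv").write_text("c,d\n", encoding="utf-8")

    csvs = sorted(p.name for p in base.glob("*.csv"))  # pattern-match filenames
    print(csvs)  # ['q1.csv', 'q2.csv']
```

glob() and mkdir() replace a lot of fiddly os.path and os.makedirs code.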

Working with JSON

Python’s json module makes it easy to read and write JSON files.

import json

# Writing JSON
data = {"name": "Manish", "age": 25, "skills": ["Python", "JS"]}
with open("data.json", "w", encoding="utf-8") as f:
    json.dump(data, f, indent=2)  # indent for pretty printing

# Reading JSON
with open("data.json", "r", encoding="utf-8") as f:
    loaded = json.load(f)         # returns a dict
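
When the JSON lives in a string (say, an API response) rather than a file, the s-variants do the same job:

```python
import json

data = {"name": "Manish", "skills": ["Python", "JS"]}

text = json.dumps(data)       # dict -> JSON string
restored = json.loads(text)   # JSON string -> dict

print(restored == data)       # True — round-trips cleanly
```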

Working with CSV

import csv

# Reading CSV
with open("people.csv", "r", encoding="utf-8", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)         # grab the header row
    for row in reader:
        print(row)                # each row is a list of strings

# Writing CSV
with open("output.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "age"])
    writer.writerow(["Manish", 25])

The csv docs recommend newline="" for both reading and writing — it lets the module handle line endings itself instead of fighting the platform.
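
For CSVs with a header row, csv.DictReader and csv.DictWriter let us work with column names instead of indexes. A sketch using an in-memory buffer as a stand-in for a real file:

```python
import csv
import io

buf = io.StringIO()  # behaves like a file, but lives in memory

writer = csv.DictWriter(buf, fieldnames=["name", "age"])
writer.writeheader()                          # writes "name,age"
writer.writerow({"name": "Manish", "age": 25})

buf.seek(0)                                   # rewind so we can read it back
rows = list(csv.DictReader(buf))              # each row is a dict keyed by header
print(rows[0]["name"], rows[0]["age"])        # Manish 25
```

Note that DictReader gives us strings back — "25", not 25. CSV has no types.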

In simple language, with open(...) is the pattern we use 99% of the time. Pick the right mode, don’t forget encoding, and let the with block handle cleanup.


40

Design Patterns in Python

advanced design-patterns singleton factory observer

Design patterns are reusable solutions to common problems. The good news? Python’s features — first-class functions, duck typing, decorators — make many patterns way simpler than in Java or C++. Some patterns are so baked into the language that we use them without realizing.

Singleton

Ensures only one instance of a class exists. Think of it like a database connection pool — we want exactly one.

class Database:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

db1 = Database()
db2 = Database()
print(db1 is db2)  # True — same object

The Pythonic shortcut? Just use a module. Module-level variables are singletons by nature — Python only loads a module once.
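
Another lightweight option is a decorator that caches the first instance. A sketch, not the only way — and note the trade-off that the decorated name now refers to a function, not the class:

```python
def singleton(cls):
    instances = {}                     # one slot per decorated class

    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)  # build once
        return instances[cls]          # reuse forever after

    return get_instance

@singleton
class Cache:
    def __init__(self):
        self.data = {}

c1 = Cache()
c2 = Cache()
print(c1 is c2)  # True — same object
```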

Factory

Creates objects without exposing the creation logic. In Python, we often use @classmethod as a factory.

class User:
    def __init__(self, name, role):
        self.name = name
        self.role = role

    @classmethod
    def admin(cls, name):
        return cls(name, role="admin")

    @classmethod
    def guest(cls, name):
        return cls(name, role="guest")

admin = User.admin("Manish")   # cleaner than User("Manish", "admin")
guest = User.guest("Visitor")
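
A function-plus-dict variant of the same idea, handy when the type to build comes from data such as a config string. Circle and Square here are made-up classes for the sketch:

```python
class Circle:
    def area(self):
        return 3.14159          # unit circle, just for the demo

class Square:
    def area(self):
        return 1.0

SHAPES = {"circle": Circle, "square": Square}   # string -> class registry

def make_shape(kind):
    try:
        return SHAPES[kind]()   # look the class up, then instantiate it
    except KeyError:
        raise ValueError(f"unknown shape: {kind}")

shape = make_shape("square")
print(type(shape).__name__)     # Square
```

New shapes register themselves with one dict entry — no if/elif chain to maintain.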

Observer

When one object changes, all its “watchers” get notified. Think of it like a newsletter — subscribers get updates automatically.

class EventEmitter:
    def __init__(self):
        self._listeners = {}

    def on(self, event, callback):
        self._listeners.setdefault(event, []).append(callback)

    def emit(self, event, *args):
        for cb in self._listeners.get(event, []):
            cb(*args)

emitter = EventEmitter()
emitter.on("login", lambda user: print(f"{user} logged in"))
emitter.emit("login", "Manish")  # Manish logged in

Strategy

Swap out an algorithm at runtime. In languages like Java, this needs interfaces and classes. In Python, we just pass a function.

def sort_by_name(users):
    return sorted(users, key=lambda u: u["name"])

def sort_by_age(users):
    return sorted(users, key=lambda u: u["age"])

def display_users(users, strategy):
    for user in strategy(users):
        print(user)

users = [{"name": "Zara", "age": 25}, {"name": "Aman", "age": 30}]
display_users(users, sort_by_name)  # sorted by name
display_users(users, sort_by_age)   # sorted by age

In simple language, first-class functions eliminate the need for a whole Strategy class hierarchy.

Decorator Pattern

We already know this one from Python decorators. Wrap a function to extend its behavior without modifying it.

import functools

def log_calls(func):
    @functools.wraps(func)      # preserves func.__name__, docstring, etc.
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def greet(name):
    return f"Hello, {name}"

greet("Manish")  # prints "Calling greet", then returns "Hello, Manish"

Iterator

Built right into Python. Any object with __iter__ and __next__ is an iterator. We use them every time we write a for loop.

class Countdown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        val = self.current
        self.current -= 1
        return val

for num in Countdown(3):
    print(num)  # 3, 2, 1
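
In everyday code we'd usually write this as a generator function instead — yield gives us __iter__ and __next__ for free:

```python
def countdown(start):
    while start > 0:
        yield start      # pauses here, resumes on the next iteration
        start -= 1

print(list(countdown(3)))  # [3, 2, 1]
```

Same protocol, a fraction of the code — StopIteration is raised automatically when the function returns.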

Context Manager

The with statement pattern. Handles setup and teardown automatically — great for files, locks, database connections.

import time

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()   # monotonic clock, better for timing than time.time()
        return self

    def __exit__(self, *args):
        print(f"Elapsed: {time.perf_counter() - self.start:.2f}s")

with Timer():
    sum(range(1_000_000))  # prints elapsed time when block exits
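
The same thing can be written as a generator with contextlib.contextmanager — everything before yield is setup, everything after is teardown:

```python
import time
from contextlib import contextmanager

@contextmanager
def timer():
    start = time.perf_counter()   # setup: runs on entering the with block
    try:
        yield                     # the body of the with block runs here
    finally:
        print(f"Elapsed: {time.perf_counter() - start:.2f}s")  # teardown

with timer():
    sum(range(1_000_000))         # prints elapsed time when the block exits
```

The try/finally guarantees the teardown runs even if the block raises.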

The key takeaway: Python’s dynamic nature — first-class functions, duck typing, protocols — means we get many patterns “for free.” We don’t need heavy class hierarchies when a simple function or module does the job.


41

Common Output Questions

intermediate interview output-questions tricky gotchas

These are the classic “What’s the output?” questions that come up in Python interviews. Each one tests a specific gotcha. Let’s walk through them.

1. Mutable Default Argument

def add_item(item, lst=[]):
    lst.append(item)
    return lst

print(add_item("a"))
print(add_item("b"))

Output: ['a'] then ['a', 'b']

The default list [] is created once when the function is defined, not on each call. Every call that uses the default shares the same list object. Fix: use lst=None and create a new list inside the function.
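
The fixed version, spelled out:

```python
def add_item(item, lst=None):
    if lst is None:
        lst = []          # a fresh list on every call that uses the default
    lst.append(item)
    return lst

print(add_item("a"))  # ['a']
print(add_item("b"))  # ['b'] — no leftover state between calls
```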

2. Late Binding Closures

funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])

Output: [2, 2, 2]

Closures capture the variable, not the value. By the time we call the lambdas, the loop is done and i is 2. Fix: use a default argument to capture the current value: lambda i=i: i.
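
Both versions side by side:

```python
# Late binding: every lambda reads the same i, after the loop has finished
late = [lambda: i for i in range(3)]
print([f() for f in late])       # [2, 2, 2]

# Fix: default arguments are evaluated at definition time, freezing each value
fixed = [lambda i=i: i for i in range(3)]
print([f() for f in fixed])      # [0, 1, 2]
```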

3. Integer Caching

a = 256
b = 256
print(a is b)

c = 257
d = 257
print(c is d)

Output: True then False (in the classic REPL, entered line by line)

Python caches integers from -5 to 256 for performance. So a and b point to the same object, while numbers outside that range usually create new objects each time. One wrinkle: when both assignments live in the same script or function, CPython's constant folding can reuse a single 257 object and print True anyway. Either way, this is an implementation detail of CPython — never rely on is for number comparisons.

4. String Interning

a = "hello"
b = "hello"
print(a is b)

c = "hello world"
d = "hello world"
print(c is d)

Output: True then False (typically, in the classic REPL)

Python automatically interns (reuses) strings that look like identifiers — no spaces, simple characters. "hello" gets interned, "hello world" might not. As with integer caching, constants inside a single script or function can be folded together and compare True regardless. This is a CPython optimization, not a language guarantee. Always use == for string comparison, never is.

5. List Multiplication Gotcha

grid = [[0]] * 3
grid[0][0] = 5
print(grid)

Output: [[5], [5], [5]]

The * operator doesn’t create three separate lists. It creates three references to the same inner list. Changing one changes all of them. Fix: use a list comprehension: [[0] for _ in range(3)].

6. Exception Variable Scope

try:
    raise ValueError("oops")
except ValueError as e:
    error = e

print(type(error))

try:
    print(e)
except NameError:
    print("e is gone!")

Output: <class 'ValueError'> then e is gone!

The variable e is deleted after the except block exits. This is by design — it prevents reference cycles with the traceback. But if we assign it to another name (like error), that reference survives.

7. Tuple with Mutable Element

t = ([1, 2],)
t[0].append(3)
print(t)

Output: ([1, 2, 3],)

Wait, tuples are immutable! Yes, but the tuple holds a reference to the list, and the reference doesn’t change. We’re not reassigning t[0] — we’re mutating the list it points to. The tuple itself is unchanged (same reference), but the list inside it grew.
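
A related twist that often follows this question: augmented assignment on the tuple element both raises an error and mutates the list.

```python
t = ([1, 2],)
try:
    t[0] += [3]    # the list extends itself in place first,
                   # then assigning the result back into the tuple fails
except TypeError:
    pass

print(t)  # ([1, 2, 3],) — the error didn't undo the mutation
```

That's because += on a list calls __iadd__ (which succeeds, in place) before the tuple item assignment (which fails).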

8. Chained Comparisons

print(1 < 2 < 3)
print(1 < 2 > 0)
print(3 > 2 > 3)

Output: True, True, False

Python chains comparisons. 1 < 2 < 3 becomes 1 < 2 and 2 < 3. Same idea: 1 < 2 > 0 becomes 1 < 2 and 2 > 0, which is True and True. This is different from most languages where 1 < 2 > 0 would evaluate left-to-right as True > 0.

9. is vs == with Lists

a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)
print(a is b)
print(a is c)

Output: True, False, True

== checks if the values are equal. is checks if they’re the same object in memory. a and b have the same content but are two different list objects. c = a doesn’t copy — it creates another reference to the same object. Think of is as “are these the same box?” and == as “do these boxes contain the same stuff?”

In simple language, most of these gotchas come down to understanding the difference between objects and references, and knowing that Python reuses objects in surprising ways for performance.