You can also find all 100 answers here: Devinterview.io - Python
Python is a high-level, multi-paradigm programming language known for its simplicity, readable syntax, and a massive ecosystem. It is widely used in AI, web development, and automation.
Python is an interpreted language, meaning code is executed line-by-line by the CPython interpreter. This removes the need for a separate compilation step, enabling an interactive workflow through REPLs and notebooks.
Python uses significant indentation rather than curly braces or keywords to define code blocks. Its syntax mimics natural English, adhering to the PEP 8 style guide to ensure high maintainability and readability.
Python is platform-independent. Code written on one operating system (e.g., Windows) runs seamlessly on others (e.g., Linux, macOS) without modification, provided the Python interpreter is installed.
Python encourages modularity through modules and packages. Developers can organize complex logic into reusable components, which facilitates the development of large-scale applications.
The Python Package Index (PyPI) hosts over 500,000 libraries. It provides specialized tools for Data Science (Pandas), Machine Learning (PyTorch, TensorFlow), and Web Development (Django, FastAPI).
Python is a "general-purpose" language. It dominates diverse fields including Artificial Intelligence, scientific computing, backend web development, and DevOps scripting.
Python manages memory automatically using Reference Counting and a Cyclic Garbage Collector. This abstracts low-level memory allocation, preventing common errors like memory leaks and segmentation faults.
Python is dynamically typed, meaning variable types are determined at runtime. However, modern Python (3.5+) supports optional type hinting, allowing for better IDE support and static analysis.
# Dynamic typing with Type Hinting
def add_numbers(a: int, b: int) -> int:
return a + b
result = add_numbers(10, 20)  # Types inferred and checked

Everything in Python is an object. It supports Object-Oriented Programming (OOP), functional programming, and procedural styles, allowing developers to choose the best approach for their specific problem.
Python can be extended with modules written in C, C++, or Rust for performance-critical tasks. It acts as a "glue language," easily integrating with existing legacy codebases and low-level APIs.
Python source code is not directly executed by the CPU. Instead, it undergoes a multi-stage process involving both compilation and interpretation.
Python utilizes a hybrid model to balance portability and ease of development.
- Bytecode Compilation: When a script is run, the Python interpreter first compiles the high-level source code into bytecode. This is an intermediate, platform-independent representation of the code, often stored in __pycache__ directories as .pyc files.
- Python Virtual Machine (PVM): The PVM is the runtime engine that executes the bytecode. It iterates through the instructions and maps them to the corresponding machine-specific actions.
This "compile then interpret" workflow allows Python to be cross-platform, as the same bytecode can run on any system with a compatible PVM.
Unlike languages like C++ that compile to native machine code (binary instructions for a specific CPU), Python's bytecode is a higher-level abstraction.
While this abstraction ensures flexibility, the additional layer of the PVM can introduce overhead. However, modern implementations mitigate this by using internal optimizations and specialized opcodes to speed up the execution of common patterns.
The transformation from text to executable instructions follows these standard compiler phases:
- Lexical Analysis: The source code is broken into tokens (keywords, identifiers, literals).
- Syntax Parsing: Tokens are organized into a Parse Tree or Abstract Syntax Tree (AST) based on Python's grammar rules.
- Semantic Analysis: The AST is analyzed for context (e.g., variable scope and name resolution).
- Bytecode Generation: The final AST is converted into a sequence of bytecode instructions.
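These phases can be observed from within Python itself. The minimal sketch below (illustrative only; the snippet and variable names are assumptions) uses the standard library ast module to parse source text into an Abstract Syntax Tree and then compile that tree into a code object:

import ast

source = "total = price * quantity"

# Lexing and parsing: turn the source text into an Abstract Syntax Tree
tree = ast.parse(source)
print(ast.dump(tree, indent=4))

# Bytecode generation: compile the AST into a code object the PVM can execute
code_object = compile(tree, filename="<demo>", mode="exec")
print(code_object.co_consts)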
As of Python 3.11+, CPython introduced the Specializing Adaptive Interpreter. This mechanism identifies "hot" (frequently executed) code and optimizes it by replacing generic bytecode instructions with specialized versions tailored to the specific data types encountered at runtime.
Furthermore, Python 3.13 introduced an experimental JIT compiler based on a "copy-and-patch" architecture. This allows the PVM to transform certain bytecode sequences directly into machine code, narrowing the performance gap between Python and compiled languages.
We can inspect the bytecode generated by the compiler using the dis module.
import dis
def example_func():
# Constant folding: 15 * 20 is calculated at compile-time
return 15 * 20
# Disassemble the function to see PVM instructions
dis.dis(example_func)

Bytecode Output:
  3           0 LOAD_CONST               1 (300)
              2 RETURN_VALUE
In this output, LOAD_CONST pushes the pre-calculated value (300) onto the stack, and RETURN_VALUE returns it to the caller. This demonstrates how the Python compiler performs optimizations like constant folding before the PVM even begins interpretation.
PEP 8 is the official Style Guide for Python Code, established as a Python Enhancement Proposal. It defines the conventions for formatting and structuring Python code to maximize readability, consistency, and maintainability across the global Python ecosystem.
As Python creator Guido van Rossum famously stated, "code is read much more often than it is written." Adhering to PEP 8 reduces cognitive load, allowing developers to focus on logic rather than formatting quirks.
- Readability: Prioritizes human comprehension; if a style choice makes code harder to understand, readability takes precedence.
- Consistency: Ensures a uniform "look and feel" within a project and across the community.
- The "Pythonic" Way: Encourages the philosophy that there should be one, and preferably only one, obvious way to write a solution.
- Indentation: Use exactly 4 spaces per indentation level. Do not use tabs.
- Line Length: Limit lines to a maximum of 79 characters. While modern tools like Black or Ruff often default to 88 or 100 characters, 79 remains the standard for maximizing compatibility with multi-file side-by-side views.
- Blank Lines: Use two blank lines to surround top-level function and class definitions. Use a single blank line to separate methods inside a class.
- Classes: Use CapWords (PascalCase).
- Functions and Variables: Use lower_case_with_underscores (snake_case).
- Constants: Use UPPER_CASE_WITH_UNDERSCORES.
- Modules: Use short, lowercase names; underscores are discouraged unless necessary.
- Operators: Surround assignment (=), comparisons (==, <, >), and Booleans (and, or) with a single space.
- Delimiters: Avoid extraneous whitespace inside parentheses, brackets, or braces. Always follow a comma or semicolon with a space.
- Docstrings: Use triple double quotes (""") for all public modules, functions, classes, and methods.
- Comments: Ensure comments are up-to-date and explain the "why" (intent) rather than the "what" (obvious logic).
This script utilizes modern pathlib (the Pythonic standard for paths) while adhering to PEP 8 conventions:
from pathlib import Path
class DirectoryScanner:
"""Provides utilities for scanning file systems."""
def __init__(self, target_directory: str):
self.target_path = Path(target_directory)
def scan_files(self) -> None:
"""Recursively iterates through directory and prints file paths."""
if not self.target_path.is_dir():
return
for file_path in self.target_path.rglob("*"):
if file_path.is_file():
print(file_path.resolve())
if __name__ == "__main__":
scanner = DirectoryScanner("/path/to/data")
    scanner.scan_files()

In Python, memory allocation and garbage collection are managed automatically by the runtime, primarily through the Python Memory Manager. This system abstracts complex memory operations, ensuring safety and developer productivity.
Python manages a private Heap containing all objects and data structures. The Python Memory Manager controls this heap through several layers:
- Hierarchical Allocation: For small objects (≤ 512 bytes), Python uses a specialized allocator called obmalloc. It organizes memory into Arenas (256 KB), which are subdivided into Pools (4 KB), and further into Blocks.
- Object-Specific Allocators: Certain types (like int or list) use dedicated free lists to speed up allocation and reduce fragmentation.
- Raw Memory: Large objects (typically > 512 bytes) bypass obmalloc and use the standard C malloc() to request memory directly from the OS.
- Stack vs. Heap: The Heap stores the actual objects and data, while the Stack stores references to those objects and the execution frames.
Python utilizes Reference Counting as its primary mechanism, supplemented by a Generational Garbage Collector to handle cyclic dependencies.
Every Python object contains a field in its header called ob_refcnt.
- The count increases when an object is assigned to a variable or added to a container.
- The count decreases when a reference is deleted or goes out of scope.
- When the reference count drops to 0, the memory is immediately deallocated. Modern Python (3.12+) also uses Immortal Objects, which have a fixed reference count to improve performance for shared constants.
import sys
a = [1, 2, 3]
print(sys.getrefcount(a)) # Output: 2 (variable 'a' and the argument to getrefcount)
b = a
print(sys.getrefcount(a)) # Output: 3

To reclaim Circular References (where objects point to each other), Python's gc module uses a cycle-detecting algorithm.
- Generations: Objects are categorized into three generations: 0, 1, and 2.
- Promotion: New objects start in generation 0. If they survive a collection cycle, they are promoted to generation 1, and eventually generation 2.
- Thresholds: Collection is triggered when the number of allocations minus deallocations exceeds a specific threshold. Generation 0 is scanned most frequently, while generation 2 is scanned least often, following the heuristic that younger objects are more likely to die young.
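The generational behavior described above can be inspected at runtime with the gc module. A small illustrative sketch (exact thresholds and counts vary between interpreter versions):

import gc

# Collection thresholds for generations 0, 1, and 2 (e.g., (700, 10, 10))
print(gc.get_threshold())

# Objects tracked in each generation since the last collection
print(gc.get_count())

# Force a full collection and report how many unreachable objects were found
unreachable = gc.collect()
print(f"Collected {unreachable} unreachable objects")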
Python's architecture prioritizes safety over manual control:
- Abstraction: Unlike C, where developers use malloc() and free(), Python's manager prevents Memory Leaks and Dangling Pointers automatically.
- Overhead: Python objects have significant overhead; every object must store its Type Pointer and Reference Count. A 4-byte int in C is significantly smaller than a Python int object.
- Performance: Python's automatic management introduces occasional "stop-the-world" pauses during Generational GC cycles, whereas C offers deterministic performance at the cost of manual memory tracking.
Python offers numerous built-in data types that provide varying functionalities and utilities for efficient memory management and data manipulation. These are generally categorized by their mutability.
Represents arbitrary-precision integers, such as 42 or -10. In modern Python, the int type handles all integer sizes automatically, removing the need for a separate "long" type.
Represents double-precision floating-point numbers, such as 3.14 or -0.001, following the IEEE 754 standard.
Used for mathematical computations involving a real and an imaginary part, represented in the form a + bj (for example, 2 + 3j).
A subclass of integers representing logical values: True (internally 1) and False (internally 0).
An immutable sequence of Unicode characters. Python strings are optimized for performance and support extensive slicing and formatting capabilities.
An ordered, immutable collection of items. Tuples are often used to store heterogeneous data and are hashable, allowing them to be used as keys in dictionaries.
An immutable version of a set. It is hashable and contains unique elements, making it suitable as a dictionary key or an element of another set.
An immutable sequence of single bytes (8-bit values). It is primarily used for handling binary data, such as images, encoded text, or network packets.
Represents an immutable sequence of numbers, typically used for iterating in loops. It is memory-efficient because it calculates values on demand rather than storing them in memory (lazy evaluation).
The type for the singleton object None, which is used to signal the absence of a value, a null pointer equivalent, or a default state.
A dynamic, ordered collection of items. Lists are highly versatile, supporting indexing, slicing, and various in-place modifications like append() and extend().
An unordered collection of unique, hashable items. Sets provide optimized O(1) average-case membership testing and support mathematical operations such as union, intersection, and difference.
A collection of key-value pairs. Since Python 3.7+, dictionaries maintain insertion order as a guaranteed language feature while providing O(1) average-case lookups, insertions, and deletions.
A mutable counterpart to bytes. It allows in-place modification of binary sequences without the overhead of creating new objects.
A generalized pointer that allows accessing the internal data of an object that supports the buffer protocol (like bytes or bytearray) without copying the data.
Space-efficient storage for basic values (integers, floats) constrained to a single type. It is available via the array module and is more memory-efficient than a list for large datasets of primitive types.
A double-ended queue designed for fast O(1) appends and pops from both ends. It is available in the collections module and is preferred over lists for queue implementations.
The most fundamental base type in Python. Every class inherits from object, and it can be instantiated to create a unique sentinel object.
A simple object that allows for the attribution of arbitrary properties via dot notation. It is often used as a lightweight, mutable alternative to an empty class.
The internal type for user-defined functions. Functions are first-class citizens in Python, meaning they can be passed as arguments, returned from other functions, and assigned attributes.
# Illustrating core built-in types
integer_val = 10 # int
string_val = "Python 2026" # str
list_val = [1, 2, 3] # list (mutable)
tuple_val = (1, 2, 3) # tuple (immutable)
dict_val = {"key": "value"} # dict (mapping)
# Using complex numbers and sets
complex_num = 2 + 3j # complex
unique_elements = {1, 2, 2, 3} # set: {1, 2, 3}
# Binary and specialized types
raw_data = b"binary" # bytes
mutable_buffer = bytearray(raw_data) # bytearray

The distinction between mutable and immutable objects is a core principle of Python's data model, determining how objects are stored, accessed, and modified in memory.
- Mutable Objects: These objects allow their internal state or content to be changed in-place after creation. Modifications do not change the object's unique identity, which is retrieved via id().
- Immutable Objects: These objects cannot be altered once created. Any operation that purports to modify an immutable object actually results in the creation of a new object with a unique identity and the updated value.
- Mutable: list, set, dict, bytearray, and most user-defined classes (unless explicitly made immutable).
- Immutable: int, float, complex, str, tuple, frozenset, bytes, and bool.
The following example illustrates how Python handles memory and identity for both types:
# Immutable objects (str, tuple)
text = "Hello"
my_tuple = (1, 2, 3)
try:
text[0] = 'M' # Raises TypeError: 'str' object does not support item assignment
my_tuple[0] = 100 # Raises TypeError
except TypeError as e:
print(f"Error: {e}")
# Mutable objects (list)
my_list = [1, 2, 3]
original_id = id(my_list)
my_list.append(4) # Modified in-place
print(id(my_list) == original_id) # Output: True (Identity is preserved)
# Reassignment of an immutable integer
x = 10
first_id = id(x)
x += 1 # Creates a new int object; x now points to a different id
print(id(x) == first_id) # Output: False

Immutable objects provide data integrity and are inherently thread-safe because their state cannot be changed by concurrent processes. Crucially, only immutable objects (or containers like tuples containing only immutable elements) are hashable. This allows them to be used as dictionary keys or elements in a set, as their hash value remains constant.
Mutable objects are more memory-efficient when dealing with large datasets or frequent updates. Modifying a collection in-place typically has an amortized time complexity of O(1) (for example, list.append), whereas producing an updated copy of an immutable object requires copying every element in O(n).
- Memory Management: Python optimizes memory for specific immutable objects through interning (e.g., small integers and certain strings are reused rather than recreated). Mutable objects are managed via dynamic resizing and lack this specific reuse optimization.
- Function Arguments: Python utilizes Pass-by-Assignment. When a mutable object is passed to a function, any modifications made inside the function persist in the original object. When an immutable object is passed, the original variable remains unchanged because any "modification" inside the function scope creates a local reference to a new object.
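A short sketch of both behaviors under pass-by-assignment (the helper names are illustrative):

def append_item(items: list, value: int) -> None:
    # The caller's list object is mutated in place
    items.append(value)

def rebind_number(n: int) -> None:
    # Rebinding only changes the local name; the caller's int is untouched
    n = n + 1

data = [1, 2]
append_item(data, 3)
print(data)   # [1, 2, 3]

x = 10
rebind_number(x)
print(x)      # 10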
Exception handling in Python is a structured mechanism used to manage runtime errors gracefully, ensuring program stability. It relies on a block-based syntax to catch, process, and recover from exceptions without terminating the execution flow unexpectedly.
The standard syntax involves four primary blocks:
- try: Contains code that might raise an exception.
- except: Executes if an exception occurs. It is best practice to catch specific exceptions (e.g., ValueError) rather than using a bare except:.
- else: Executes only if the try block completes without any exceptions.
- finally: Executes regardless of the outcome. This is used for resource cleanup, such as closing file descriptors or database connections.
try:
file = open("data.txt", "r")
data = int(file.read())
except FileNotFoundError:
print("Error: File missing.")
except ValueError:
print("Error: Invalid data format.")
else:
print(f"Data processed: {data}")
finally:
if 'file' in locals():
        file.close()

Introduced in Python 3.11, Exception Groups (ExceptionGroup) allow the propagation of multiple unrelated exceptions simultaneously. This is essential for asynchronous tasks or concurrent executions (e.g., asyncio). The except* syntax allows you to handle specific exception types within a group.
try:
raise ExceptionGroup("Batch error", [ValueError("Invalid ID"), TypeError("Limit reached")])
except* ValueError as eg:
for e in eg.exceptions:
print(f"Handled ValueError: {e}")
except* TypeError as eg:
for e in eg.exceptions:
        print(f"Handled TypeError: {e}")

The with keyword implements the Context Manager protocol, utilizing __enter__ and __exit__ methods. It ensures that resources are automatically released, effectively handling exceptions that occur during resource usage without explicit finally blocks.
with open("log.txt", "a") as f:
f.write("Log entry...")
# File is closed automatically, even if write fails.

The raise statement manually triggers an exception. Implicit and Explicit Chaining preserves the traceback of the original cause using the from keyword, which is critical for debugging complex failures.
def fetch_api():
try:
return perform_request()
except ConnectionError as e:
# Explicitly chain the original error to a custom exception
        raise RuntimeError("API unavailable") from e

Modern Python allows attaching metadata to exceptions using the add_note() method (3.11+), which is useful for adding contextual information without modifying the exception's message.
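A brief sketch of add_note() in use (load_config is a hypothetical helper used only for illustration):

try:
    config = load_config("settings.toml")  # hypothetical helper that may raise
except FileNotFoundError as exc:
    exc.add_note("Hint: run the setup step to generate a default settings file.")
    raise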
For unhandled exceptions, sys.excepthook allows developers to define a custom global handler for logging or telemetry before the interpreter exits.
import sys
def global_logger(exctype, value, tb):
print(f"CRITICAL: {value}")
sys.excepthook = global_logger

Within an except block:
- pass: Silently suppresses an exception (use with caution).
- continue: Skips the current iteration of a loop when an error occurs, allowing the rest of a dataset to be processed.
for item in raw_data:
try:
process(item)
except DataEntryError:
        continue # Skip corrupted record and move to next

Lists and Tuples are ordered sequences in Python, but they differ significantly in mutability, performance, and storage behavior.
- Mutability: Lists are mutable, meaning elements can be added, removed, or modified after creation. Tuples are immutable; their structure and references cannot be changed once instantiated.
- Memory & Performance: Tuples have a lower memory footprint. Lists utilize over-allocation to facilitate O(1) amortized time for append() operations, whereas tuples are allocated the exact amount of memory required. Consequently, tuples are faster to create and slightly faster to iterate over.
- Hashability: Because they are immutable, tuples (containing only hashable elements) are hashable and can be used as dictionary keys or set elements. Lists are unhashable and will raise a TypeError if used in such contexts.
- Lists are ideal for collections of items that require frequent updates, resizing, or sorting during a program's execution.
- Tuples are preferred for fixed data structures, protecting data from accidental modification, and serving as constant keys in mapping types. They are often used for heterogeneous data (records), whereas lists are typical for homogeneous data.
my_list = ["apple", "banana"]
my_list.append("cherry")
my_list[1] = "blackberry"

my_tuple = (1, 2, 3, 4)
# Unpacking a tuple
a, b, c, d = my_tuple
# my_tuple[0] = 5 # This would raise a TypeError

In Python, a dictionary is an ordered collection of key:value pairs that provides O(1) average-case lookups by key.
- Keys: Must be unique and hashable (immutable types like str, int, or a tuple containing only hashable elements).
- Values: Can be of any data type, including lists or other dictionaries, and can be duplicated.
- Ordered: Since Python 3.7+, dictionaries are guaranteed to maintain the insertion order of items.
- Mutable: You can add, remove, or modify items after the dictionary is created.
There are several idiomatic ways to create a dictionary in Python:
- Literal Syntax: Defining pairs directly within curly braces {}.
- dict() Constructor: Utilizing keyword arguments or a sequence of key-value pairs.
- Dictionary Comprehension: Generating a dictionary dynamically using an iterable and logic.
- zip() Function: Mapping two separate iterables (one for keys, one for values) together.
- fromkeys() Method: Creating a dictionary with a predefined set of keys and an optional shared default value.
The most common and readable way to initialize a dictionary.
# Creating a dictionary using literals
student = {
"name": "John Doe",
"age": 21,
"courses": ["Math", "Physics"]
}Useful when keys are valid identifiers or when converting existing sequences into a dictionary mapping.
# Using keyword arguments
user = dict(username="jdoe", status="active", id=452)
# Using a list of tuples (sequence of key-value pairs)
data = dict([("id", 1), ("type", "admin")])A concise way to transform one iterable into a dictionary, providing high performance for dynamic creation.
# Squaring numbers using comprehension
squares = {x: x**2 for x in range(1, 6)}
# Result: {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}

Ideal for merging two distinct lists or tuples into a dictionary mapping.
keys = ["CPU", "GPU", "RAM"]
values = ["Intel i7", "NVIDIA RTX", "32GB"]
# Pairing keys and values into a dictionary
specs = dict(zip(keys, values))

Used to initialize a dictionary with a specific set of keys and a shared default value.
keys = ["task1", "task2", "task3"]
# Initializes all keys with the value "Pending"
todo_list = dict.fromkeys(keys, "Pending")

Both the == and is operators in Python facilitate different types of comparisons:
- The == operator evaluates value equality. It checks if the data held by two objects is equivalent, typically by invoking the __eq__ method.
- The is operator evaluates object identity. It verifies if two variables reference the exact same instance in memory.
In Python, every object has a unique identifier assigned by the interpreter. The is operator compares these memory addresses: if a is b evaluates to True, both names point to the exact same object (equivalently, id(a) == id(b)).
- is: Compares the identity (memory address) of two objects using the id() function.
- ==: Compares the value (contents) of two objects by checking for logical equivalence.
# Initialize two lists with identical values
list_a = [1, 2, 3]
list_b = [1, 2, 3]
list_c = list_a
print(list_a == list_b) # True: The values are the same
print(list_a is list_b) # False: They are different objects in memory
print(list_a is list_c) # True: Both point to the same memory address

- ==: Use for standard equality comparisons, such as comparing strings, numbers, or data structures where the content is the primary concern.
- is: Use exclusively when comparing against singletons, most commonly for None checks (e.g., if val is None:), to ensure identity-level precision. Avoid using is for comparing literals like integers or strings, as this relies on implementation-specific behavior such as interning.
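A minimal sketch of these guidelines; note that the integer identity result is a CPython implementation detail, not guaranteed behavior:

value = None

# Identity check against the None singleton (preferred style)
if value is None:
    print("No value provided")

x = 1000
y = 1000
print(x == y)  # True: equal values
print(x is y)  # Implementation-dependent: often False for larger ints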
Python functions are first-class objects that encapsulate logic, enable reusability, and manage state through scoping. In Python, a function is an instance of the function class, allowing it to be passed as an argument, returned from other functions, and assigned to variables.
- Function Signature: Defined by the def keyword, it includes the name, parameters (positional, keyword-only, or variadic), and optional type hints for static analysis.
- Function Body: An indented block of code containing the logic. Python compiles this into bytecode (stored in the __code__ attribute) upon definition.
- Return Statement: Explicitly exits a function with a value. If omitted, the function implicitly returns the None singleton.
- Function Object: When defined, Python creates a name binding in the current namespace that points to the function object stored in memory.
When a function is invoked, the Python interpreter performs the following steps:
- Frame Creation: A Frame Object is pushed onto the Call Stack. This frame encapsulates the execution environment, including the local symbol table and the evaluation stack.
- Parameter Binding: Arguments are assigned to parameters using pass-by-object-reference (also known as pass-by-assignment). If a mutable object (e.g., a list) is passed, modifications inside the function affect the original object.
- Bytecode Execution: The Python Virtual Machine (PVM) executes the bytecode. Since Python 3.11+, the Specializing Adaptive Interpreter may optimize this bytecode inline for increased performance.
- Return and Cleanup: Upon a return or reaching the end of the block, the return value is passed to the calling context, the frame is popped, and local variables are eligible for Garbage Collection via reference counting.
Python uses the LEGB Rule to resolve names during execution:
- Local (L): Names assigned within the function (and not declared global).
- Enclosing (E): Names in the local scope of any enclosing functions (relevant for Closures).
- Global (G): Names assigned at the top level of the module or declared via the global keyword.
- Built-in (B): Names pre-assigned in the builtins module (e.g., len, range).
def outer_function(x: int):
# Enclosing scope
y = 10
def inner_function(z: int) -> float:
# Local scope accessing Enclosing (x, y) and Global
return (x + y + z) * 1.0
return inner_function
# Usage
closure = outer_function(5)
result = closure(3) # Result: 18.0

- Closures: Functions that "remember" values from their enclosing lexical scope even after the outer function has finished executing, stored in the __closure__ attribute.
- Decorators: Higher-order functions that take a function as an argument and return a new function, typically to extend behavior without modifying the source code.
- Recursion Limits: Python imposes a maximum recursion depth (default is usually 1000) to prevent infinite recursion from exhausting the C stack, though modern interpreters handle frame management more efficiently.
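The current limit can be read and adjusted via the sys module, as in this small sketch:

import sys

print(sys.getrecursionlimit())  # Usually 1000

# Raising the limit is possible but risks overflowing the underlying C stack
sys.setrecursionlimit(2000)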
Functions should ideally be pure, returning values based solely on inputs without modifying global state. Using the nonlocal or global keywords allows modification of outer scopes but increases complexity and reduces predictability. Encapsulating logic within functions ensures that data flows through explicit parameters and return values rather than hidden shared state.
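The sketch below (illustrative names) shows nonlocal rebinding an enclosing variable to keep state inside a closure, which is exactly the kind of hidden state pure functions avoid:

def make_counter():
    count = 0

    def increment() -> int:
        nonlocal count  # rebind the enclosing variable instead of creating a local
        count += 1
        return count

    return increment

counter = make_counter()
print(counter())  # 1
print(counter())  # 2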
A lambda function is a small, anonymous function defined using the lambda keyword. Unlike standard functions defined with def, lambdas are typically used for short-lived operations where a formal name is unnecessary for the logic being performed.
Lambdas are not bound to a name by default, making them ideal for one-off utility tasks within a localized scope, such as inside a function call.
The body is strictly limited to a single expression. This facilitates brevity but prevents the use of multiple statements, assignments, or complex control flow (like loops) within the function body.
There is no explicit return statement; the evaluated result of the expression is returned automatically. The syntax follows the form: lambda arguments: expression.
Lambdas streamline code by allowing function definitions in-place, reducing the structural overhead of defining a full function block for simple transformations or predicates.
Functions like map() and filter() often take a lambda as an argument to define transformations or predicates on the fly.
numbers = [1, 2, 3, 4]
# Squaring elements in a list
squared = list(map(lambda x: x**2, numbers)) # [1, 4, 9, 16]

Note: To use reduce(), you must import it from the functools module.
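For example, a minimal sketch of reduce() with a lambda:

from functools import reduce

numbers = [1, 2, 3, 4]

# Fold the list into a single value: ((1 * 2) * 3) * 4
product = reduce(lambda acc, x: acc * x, numbers)
print(product)  # 24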
Lambdas frequently serve as a custom key for sorting complex data structures. This is highly effective for ordering dictionaries or objects by specific attributes:
data = [{'name': 'Alice', 'age': 30}, {'name': 'Bob', 'age': 25}]
# Sort by age attribute
sorted_data = sorted(data, key=lambda x: x['age'])

Lambdas are used in GUI frameworks or asynchronous programming as callbacks, where a function is passed as an argument to be executed upon an event. They are also useful as return values from higher-order functions to create specialized behaviors (closures) without naming every intermediate function.
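A small sketch of a higher-order function returning a lambda as a specialized behavior (the names are illustrative):

def make_multiplier(factor: float):
    # The returned lambda closes over `factor`
    return lambda value: value * factor

double = make_multiplier(2)
print(double(10))  # 20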
Named functions are preferred when logic is not immediately obvious. PEP 8 specifically discourages binding lambdas to identifiers (e.g., f = lambda x: x+1) because it defeats the purpose of anonymity; a def statement should be used for such cases.
Lambdas cannot contain docstrings, making them harder to document. Additionally, because they are anonymous, tracebacks will identify them only as <lambda>, which can complicate the debugging process compared to named functions.
In Python, *args and **kwargs are special syntaxes used in function definitions to handle a variable number of arguments, providing flexibility in how functions are called and how data is passed.
*args collects extra positional arguments into a tuple, while **kwargs collects extra keyword arguments into a dictionary.
- Mechanism: The single asterisk * is the unpacking operator. When placed before a parameter name in a function definition, it packs any additional positional arguments into a tuple. While args is the standard naming convention, only the * is syntactically mandatory.
- Use-Case: Ideal for functions where the number of inputs is unknown at runtime, such as mathematical aggregators or API wrappers that forward arguments to other functions.
def sum_all(*args):
# args is a tuple of all positional arguments passed
return sum(args)
print(sum_all(1, 2, 3, 4, 5)) # Output: 15

- Mechanism: The double asterisk ** captures any named (keyword) arguments that were not explicitly defined in the parameter list. These are stored in a dictionary, where keys represent the argument names and values represent the argument data.
- Use-Case: Widely used in class inheritance (passing parameters to a superclass constructor) and decorators to handle arbitrary configuration settings without modifying the function signature.
def print_metadata(**kwargs):
# kwargs is a dictionary of named arguments
for key, value in kwargs.items():
print(f"{key}: {value}")
print_metadata(version="1.2.0", author="Dev", status="Production")
# Output:
# version: 1.2.0
# author: Dev
# status: Production

Python requires a specific order in the function signature to prevent ambiguity and ensure correct argument binding. The hierarchy is defined as follows:
- Standard Positional Arguments
- Variable Positional Arguments (*args)
- Keyword-Only Arguments (with or without default values)
- Variable Keyword Arguments (**kwargs)
def complex_func(a, b, *args, mode="fast", **kwargs):
print(a, b, args, mode, kwargs)
complex_func(1, 2, 3, 4, mode="slow", debug=True)
# Output: 1 2 (3, 4) slow {'debug': True}

This structure allows the interpreter to differentiate between values intended for specific parameters and those intended for the flexible *args or **kwargs containers. Note that in modern Python, a bare * can also be used in a signature to force all subsequent arguments to be keyword-only.
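A brief sketch of the bare * in a signature (the function and parameter names are illustrative):

def connect(host: str, *, timeout: float = 5.0, retries: int = 3) -> None:
    # Every parameter after the bare * must be supplied by keyword
    print(f"Connecting to {host} (timeout={timeout}, retries={retries})")

connect("db.local", timeout=2.0)   # OK
# connect("db.local", 2.0)         # TypeError: takes 1 positional argument but 2 were given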
In Python, a decorator is a design pattern and a language feature that allows you to dynamically modify or extend the behavior of functions or classes without permanently changing their source code. They are a form of metaprogramming where one function (the wrapper) modifies the behavior of another.
- Higher-Order Functions: Decorators are functions that take another function as an argument and return a new function.
- Closures: They leverage Python's support for closures to "wrap" the original function, allowing code execution both before and after the target function runs.
- First-Class Objects: Since functions in Python are first-class objects, they can be passed as arguments, assigned to variables, and returned from other functions.
- Authorization: Verifying user permissions or authentication tokens before executing a route or method.
- Logging and Profiling: Measuring execution time or recording function call frequency.
- Caching/Memoization: Optimizing performance by storing results of expensive computations (e.g., via @functools.cache).
- Validation: Enforcing type safety, data constraints, or input sanitization.
- Rate Limiting: Throttling how often a specific function can be invoked within a set timeframe.
Modern Python development requires functools.wraps to preserve the original function's metadata (such as __name__, __doc__, and type annotations).
from functools import wraps
from typing import Callable, Any
def my_decorator(func: Callable) -> Callable:
@wraps(func)
def wrapper(*args: Any, **kwargs: Any) -> Any:
# Logic executed before the function
result = func(*args, **kwargs)
# Logic executed after the function
return result
return wrapper
@my_decorator
def say_hello(name: str) -> None:
"""Greets the user."""
    print(f"Hello, {name}!")

To pass arguments to the decorator itself, you must implement an additional layer of nesting, known as a decorator factory.
from functools import wraps
from typing import Callable, Any
def repeat(times: int) -> Callable:
def decorator(func: Callable) -> Callable:
@wraps(func)
def wrapper(*args: Any, **kwargs: Any) -> Any:
result = None
for _ in range(times):
result = func(*args, **kwargs)
return result
return wrapper
return decorator
@repeat(times=3)
def greet() -> None:
    print("Execution...")

The @decorator syntax is syntactic sugar. The following two approaches are functionally identical:
# Modern Syntactic Sugar
@my_decorator
def func(): ...
# Manual Equivalent
func = my_decorator(func)

Without @wraps(func), the decorated function would lose its identity, appearing to debugging tools and IDEs as the internal wrapper function. In modern professional development, preserving this introspection capability is essential for maintainability, documentation generators, and robust type-checking via tools like MyPy or Pyright.
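Reusing the my_decorator defined earlier (which applies functools.wraps), a quick sketch shows that the metadata survives decoration:

@my_decorator
def compute() -> None:
    """Runs the computation."""

print(compute.__name__)  # 'compute' (not 'wrapper'), thanks to functools.wraps
print(compute.__doc__)   # 'Runs the computation.'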
You can create a Python module through the following standard methods:
- Define: Save a Python file with a .py extension. This file acts as a module, where the filename (without the extension) serves as the module name.
- Package Initialization: To organize multiple modules into a package, place an __init__.py file inside the directory. While implicit namespace packages exist, including this file explicitly marks a directory as a regular package and allows for package-level initialization.
Next, use the import statement to access the module and its functionality within your project.
Save the below code as a file named math_operations.py. Note the use of type hints, which is standard for modern Python modules:
# math_operations.py
def add(x: float, y: float) -> float:
return x + y
def subtract(x: float, y: float) -> float:
return x - y
def multiply(x: float, y: float) -> float:
return x * y
def divide(x: float, y: float) -> float:
    """Returns the result of x divided by y."""
if y == 0:
raise ValueError("Cannot divide by zero.")
    return x / y

You can utilize the math_operations module by using import as shown below:
import math_operations
# Access functions using the module prefix
result = math_operations.add(4, 5)
print(result) # Output: 9.0
result = math_operations.divide(10, 5)
print(result) # Output: 2.0

Alternatively, use the from syntax to import specific members directly into the local namespace:
from math_operations import add, subtract
result = add(3, 2)
print(result) # Output: 5.0

To ensure the module is professional and reusable, follow these Best Practices:
- Namespace Safety: Avoid using from module import * as it can lead to name collisions and reduces code readability (making it difficult for linters and IDEs to track origins).
- Execution Guard: Use the __name__ global variable to prevent code from executing automatically when the module is imported.
def main():
# Logic for manual testing or CLI usage
print("Testing add function:", add(10, 20))
if __name__ == "__main__":
    main()

This ensures that the block following if __name__ == "__main__": only runs when the script is executed directly, keeping the module "clean" for external imports.
Explore all 100 answers here: Devinterview.io - Python