Basic Python Interview Questions
1. What are Python’s key features?
Easy to Learn and Use
Python has a simple syntax that closely resembles natural language, making it ideal for beginners.

Interpreted Language
Python code is executed line by line, which makes debugging easier and development faster.

High-Level Language
You don’t need to manage memory or deal with low-level operations; Python handles that for you.

Dynamically Typed
You don’t need to declare variable types explicitly; Python infers them at runtime.
2. What is the difference between list, tuple, and set?
| Feature | List | Tuple | Set |
|---|---|---|---|
| Mutable | ✅ Yes | ❌ No | ✅ Yes |
| Ordered | ✅ Yes | ✅ Yes | ❌ No |
| Duplicates | ✅ Allowed | ✅ Allowed | ❌ Not Allowed |
| Syntax | `[ ]` | `( )` | `{ }` / `set()` |
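A short sketch illustrating these differences (the specific values are illustrative):

```python
lst = [1, 2, 2]      # ordered, mutable, duplicates allowed
tup = (1, 2, 2)      # ordered, immutable, duplicates allowed
st = {1, 2, 2}       # unordered, mutable, duplicates collapsed

lst[0] = 9           # fine: lists are mutable
# tup[0] = 9 would raise TypeError: tuples are immutable
print(lst, tup, st)  # [9, 2, 2] (1, 2, 2) {1, 2}
```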
3. Explain mutable vs immutable types in Python with examples.
Mutable Types
These can be changed after creation. You can modify, add, or remove elements.
Examples:
List
Set
Dictionary
Example with a List:
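A minimal sketch (the values are illustrative):

```python
nums = [1, 2, 3]
nums.append(4)    # add an element
nums[0] = 99      # modify in place
nums.remove(2)    # remove an element (by value)
print(nums)       # [99, 3, 4]
```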
🔒 Immutable Types
These cannot be changed after creation. Any operation that seems to modify them actually creates a new object.
Examples:
Tuple
String
Integer
Float
Boolean
Example with a String:
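A minimal sketch showing that string operations produce new objects rather than modifying the original:

```python
s = "hello"
t = s.upper()    # creates a new string; s is untouched
print(s, t)      # hello HELLO

# In-place modification is not allowed:
try:
    s[0] = "H"
except TypeError as e:
    print(e)     # 'str' object does not support item assignment
```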
🔍 Why It Matters
Performance: Immutable types are faster and safer for use in multi-threaded environments.
Hashing: Only immutable types can be used as dictionary keys or set elements.
Debugging: Mutable types can lead to unexpected behavior if modified unintentionally.
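The hashing point can be sketched in a couple of lines (the dictionary contents are illustrative):

```python
# Immutable (hashable) objects can be dictionary keys...
locations = {(0, 0): "origin"}
print(locations[(0, 0)])   # origin

# ...but mutable objects like lists cannot:
try:
    bad = {[0, 0]: "origin"}
except TypeError as e:
    print(e)               # unhashable type: 'list'
```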
4. What are Python’s built-in data types?
Python has a rich set of built-in data types that fall into several categories. Here's a structured overview:
🔢 Numeric Types
Used to store numbers.
int – Integer values
float – Floating-point numbers
complex – Complex numbers
🔤 Text Type
Used to store textual data.
str – String
📦 Sequence Types
Ordered collections of items.
list – Mutable sequence
tuple – Immutable sequence
range – Immutable sequence of numbers
🧩 Set Types
Unordered collections of unique items.
set – Mutable set
frozenset – Immutable set
🗂️ Mapping Type
Key-value pairs.
dict – Dictionary
✅ Boolean Type
Represents truth values.
bool – True or False
🧼 Binary Types
Used for binary data.
bytes – Immutable binary sequences
bytearray – Mutable binary sequences
memoryview – Memory view object
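A short sketch of the three binary types (the byte values are illustrative):

```python
data = bytes([72, 105])   # immutable binary sequence
print(data)               # b'Hi'

buf = bytearray(data)     # mutable copy
buf[0] = 104              # modify in place
print(buf)                # bytearray(b'hi')

view = memoryview(buf)    # zero-copy view over the buffer
print(bytes(view))        # b'hi'
```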
5. Difference between is and == in Python?
In Python, is and == are both used for comparison, but they check different things:
✅ == (Equality Operator)
Checks if two variables have the same value.
It compares the contents of the objects.
🧠 is (Identity Operator)
Checks if two variables point to the same object in memory.
It compares the identity (memory address) of the objects.
🔍 Example with Immutable Types:
But for small integers and strings, Python caches them, so:
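A minimal sketch of both cases (the small-integer cache is a CPython implementation detail, not a language guarantee):

```python
a = [1, 2, 3]
b = [1, 2, 3]
print(a == b)   # True  -- same contents
print(a is b)   # False -- two distinct objects

x = 100
y = 100
print(x is y)   # True in CPython: small integers are cached and reused
```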
🧪 Summary
| Operator | Compares | Returns True When... |
|---|---|---|
| `==` | Values | The contents of the objects are equal |
| `is` | Identity (memory) | Both variables point to the same object |
6. What are Python decorators and where do you use them?
In Python, decorators are a powerful feature that allows you to modify or enhance the behavior of functions or classes without changing their actual code.
🎯 What is a Decorator?
A decorator is a function that takes another function as input and returns a new function with added functionality.
Basic Syntax: placing @my_decorator above a function definition is equivalent to writing say_hello = my_decorator(say_hello) after the definition, as the example below shows:
```python
def my_decorator(func):
    def wrapper():
        print("Before the function runs")
        func()
        print("After the function runs")
    return wrapper

@my_decorator
def say_hello():
    print("Hello!")

say_hello()
```
🧪 Example: Logging Decorator
```python
def log(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__} with {args} and {kwargs}")
        return func(*args, **kwargs)
    return wrapper
```
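Applying it might look like this (a minimal sketch; the log decorator is repeated so the snippet runs on its own, and the add function is illustrative):

```python
def log(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__} with {args} and {kwargs}")
        return func(*args, **kwargs)
    return wrapper

@log
def add(a, b):
    return a + b

print(add(2, 3))
# Calling add with (2, 3) and {}
# 5
```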
🛠️ Where Are Decorators Used?
✅ Common Use Cases:
Logging
Access control / authentication
Timing / performance measurement
Memoization (caching results)
Validation
Flask/Django route handling
🔌 Built-in Decorators in Python
@staticmethod – Defines a static method in a class.
@classmethod – Defines a method that receives the class as the first argument.
@property – Makes a method behave like an attribute.
🧠 Advanced: Decorators with Arguments
You can also create decorators that accept arguments by nesting functions.
Here’s an example of a Python decorator with arguments, which allows you to customize the behavior of the decorator itself:
🧪 Example: Decorator that repeats a function multiple times
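A minimal sketch of the repeat(n) decorator factory described in the explanation that follows:

```python
def repeat(n):
    def decorator(func):
        def wrapper(*args, **kwargs):
            for _ in range(n):          # call the wrapped function n times
                func(*args, **kwargs)
        return wrapper
    return decorator

@repeat(3)
def greet(name):
    print(f"Hello, {name}!")

greet("Sireesh")
```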
🧾 Output:
Hello, Sireesh!
Hello, Sireesh!
Hello, Sireesh!
🔍 How It Works:
repeat(n) is a decorator factory that takes an argument n.
It returns a decorator, which wraps the original function.
The wrapper function calls the original function n times.
7. Explain list comprehension with an example.
List comprehension in Python is a concise way to create lists using a single line of code. It’s more readable and often more efficient than using traditional loops.
✅ Basic Syntax
[expression for item in iterable if condition]
expression: the value to include in the list
item: the variable representing each element in the iterable
iterable: a sequence (like a list, range, etc.)
condition (optional): filters which items to include
🧪 Example 1: Square Numbers

```python
squares = [x**2 for x in range(5)]
print(squares)  # Output: [0, 1, 4, 9, 16]
```

🧪 Example 2: Filter Even Numbers

```python
evens = [x for x in range(10) if x % 2 == 0]
print(evens)  # Output: [0, 2, 4, 6, 8]
```

🧪 Example 3: Convert Strings to Uppercase

```python
names = ["sireesh", "vidyavathi", "muthusamy"]
upper_names = [name.upper() for name in names]
print(upper_names)  # Output: ['SIREESH', 'VIDYAVATHI', 'MUTHUSAMY']
```

🧪 Example 4: Nested List Comprehension

```python
matrix = [[i * j for j in range(1, 4)] for i in range(1, 4)]
print(matrix)
# Output: [[1, 2, 3], [2, 4, 6], [3, 6, 9]]
```
8. How does Python handle memory management?
Python handles memory management through a combination of automatic mechanisms and internal optimizations. Here's a breakdown of how it works:
🧠 1. Automatic Memory Management
Python uses a built-in garbage collector to manage memory. You don’t need to manually allocate or free memory.
✅ Key Components:
Reference Counting: Every object keeps track of how many references point to it. When the count drops to zero, the object is deleted.
Garbage Collection: Handles cyclic references (e.g., objects referencing each other) that reference counting alone can't clean up.
📦 2. Object-Specific Memory Allocation
Python uses a private heap space to store objects and data structures. This is managed by the Python memory manager.
Small objects (like integers and strings) are often interned or pooled for reuse.
Large objects are handled by the underlying C memory allocator.
🔄 3. Memory Optimization Techniques
Interning: Python reuses immutable objects like small integers and strings to save memory.
Slots: You can use __slots__ in classes to reduce memory usage by avoiding dynamic attribute dictionaries.
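A minimal sketch of the __slots__ optimization (class names are illustrative):

```python
class Plain:
    def __init__(self):
        self.x = 1
        self.y = 2

class Slotted:
    __slots__ = ("x", "y")   # fixed attribute set; no per-instance __dict__
    def __init__(self):
        self.x = 1
        self.y = 2

print(hasattr(Plain(), "__dict__"))    # True
print(hasattr(Slotted(), "__dict__"))  # False -- saves memory per instance
```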
🧪 4. Tools to Monitor Memory
sys.getsizeof() – Check memory size of an object.
gc module – Inspect and control garbage collection.
memory_profiler – External tool to profile memory usage line-by-line.
🔍 Example: Reference Counting
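A minimal sketch using sys.getrefcount (note that it reports one extra reference for its own argument, and exact numbers can vary between Python versions):

```python
import sys

a = []
print(sys.getrefcount(a))  # typically 2: 'a' itself plus getrefcount's argument

b = a                      # a second reference
print(sys.getrefcount(a))  # one higher than before

del b                      # the extra reference is gone
print(sys.getrefcount(a))  # back to the original value
```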
9. What are Python namespaces?
In Python, a namespace is a container that holds names (identifiers) and maps them to objects. It helps avoid naming conflicts by organizing and managing variable/function/class names in different scopes.
🧠 Why Namespaces Matter
They ensure that names are unique and don’t clash with each other, especially in large programs or when using libraries.
🗂️ Types of Namespaces in Python
1. Built-in Namespace
Contains names like print(), len(), int(), etc.
Automatically available in any Python program.
2. Global Namespace
Includes names defined at the top level of a script or module.
Accessible throughout the module.
3. Local Namespace
Created inside functions and holds names defined within that function.
Exists only during function execution.
4. Enclosing Namespace (for nested functions)
Refers to the namespace of the outer function in a nested function scenario.
🔍 Scope vs Namespace
Namespace: Mapping of names to objects.
Scope: The region of code where a namespace is accessible.
🧪 Example: Using globals() and locals()
```python
x = 42

def test():
    y = 99
    print("Local:", locals())
    print("Global:", globals()["x"])

test()
```
10. Explain Python’s with statement and context managers.
In Python, the with statement is used to wrap the execution of a block of code within methods defined by a context manager. It simplifies resource management like opening and closing files, acquiring and releasing locks, or connecting and disconnecting from databases.
🔹 What is a Context Manager?
A context manager is an object that defines the runtime context to be established when executing a with statement. It handles setup and teardown actions using two special methods:
__enter__(): Sets up the context and returns the resource.
__exit__(self, exc_type, exc_value, traceback): Cleans up the context, even if an exception occurred.
🔹 Basic Example: File Handling
```python
with open('example.txt', 'r') as file:
    content = file.read()
# File is automatically closed after the block
```
This is equivalent to:
```python
file = open('example.txt', 'r')
try:
    content = file.read()
finally:
    file.close()
```
🔹 Custom Context Manager Using a Class
```python
class MyContext:
    def __enter__(self):
        print("Entering context")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print("Exiting context")

with MyContext():
    print("Inside the context")
```
🔹 Using contextlib for Simpler Context Managers
Python’s contextlib module provides a decorator to create context managers using generator functions:
```python
from contextlib import contextmanager

@contextmanager
def my_context():
    print("Setup")
    yield
    print("Teardown")

with my_context():
    print("Doing work")
```
🔹 Why Use with and Context Managers?
Cleaner code: No need to explicitly release resources.
Exception safety: Ensures proper cleanup even if errors occur.
Readability: Makes the intent of resource management clear.
🔹 Intermediate Python Interview Questions
1. Explain Python’s Global Interpreter Lock (GIL).
Python’s Global Interpreter Lock (GIL) is a mechanism that prevents multiple native threads from executing Python bytecodes at once. It’s a well-known limitation of the CPython interpreter (the standard and most widely used implementation of Python).
🔹 Why Does the GIL Exist?
The GIL was introduced to simplify memory management in CPython. Python uses reference counting for garbage collection, and the GIL ensures that memory operations (like incrementing or decrementing reference counts) are thread-safe without requiring complex locking mechanisms.
🔹 How the GIL Affects Multithreading
Only one thread executes Python code at a time, even on multi-core systems.
This means CPU-bound multithreaded programs don’t get true parallelism.
However, I/O-bound programs (e.g., network requests, file operations) can still benefit from multithreading because the GIL is released during I/O operations.
🔹 Example: CPU-bound vs I/O-bound
CPU-bound (limited by GIL):
```python
import threading

def compute():
    for _ in range(10**7):
        pass

threads = [threading.Thread(target=compute) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```
This won’t run faster on multiple cores due to the GIL.
I/O-bound (benefits from threading):
```python
import threading
import time

def wait():
    time.sleep(2)

threads = [threading.Thread(target=wait) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```
This will run concurrently and finish in ~2 seconds.
🔹 Alternatives to Bypass the GIL
Multiprocessing:
Uses separate processes instead of threads.
Each process has its own Python interpreter and memory space.
True parallelism for CPU-bound tasks.
```python
from multiprocessing import Process

def compute():
    for _ in range(10**7):
        pass

if __name__ == "__main__":  # required on platforms that spawn new interpreters
    processes = [Process(target=compute) for _ in range(4)]
    for p in processes: p.start()
    for p in processes: p.join()
```
Alternative Python Interpreters:
Jython (Python on JVM) and IronPython (Python on .NET) don’t have a GIL.
PyPy has a GIL but may offer performance improvements due to its JIT compiler.
🔹 Summary
| Aspect | GIL Impact |
|---|---|
| Multithreading | Limited for CPU-bound tasks |
| Multiprocessing | True parallelism |
| I/O-bound operations | Can benefit from threads |
| Memory safety | Simplified by GIL |
2. How do @staticmethod, @classmethod, and instance methods differ?
In Python, methods in a class can be categorized into instance methods, class methods, and static methods, each serving a different purpose and having different behavior regarding how they access data.
🔹 1. Instance Methods
Most common type of method.
Takes self as the first parameter.
Can access and modify instance attributes and class attributes.
```python
class MyClass:
    def __init__(self, value):
        self.value = value

    def show(self):
        print(f"Value is {self.value}")
```
✅ Use when the method needs to access or modify object state.
🔹 2. Class Methods (@classmethod)
Takes cls as the first parameter (refers to the class, not the instance).
Can access and modify class-level data, but not instance-specific data.
Defined using the @classmethod decorator.
```python
class MyClass:
    count = 0

    def __init__(self):
        MyClass.count += 1

    @classmethod
    def get_count(cls):
        return cls.count
```
✅ Use when the method needs to access or modify class state.
🔹 3. Static Methods (@staticmethod)
Doesn’t take self or cls as the first parameter.
Behaves like a regular function but lives in the class’s namespace.
Defined using the @staticmethod decorator.
```python
class MyClass:
    @staticmethod
    def add(x, y):
        return x + y
```
✅ Use when the method doesn’t need access to instance or class data, but logically belongs to the class.
🔸 Summary Table
| Feature | Instance Method | Class Method | Static Method |
|---|---|---|---|
| First argument | self | cls | None |
| Access instance? | ✅ Yes | ❌ No | ❌ No |
| Access class? | ✅ Yes | ✅ Yes | ❌ No |
| Use case | Object behavior | Factory methods, class state | Utility functions |
3. What is the difference between shallow copy and deep copy?
In Python, shallow copy and deep copy refer to different ways of copying objects, especially compound objects like lists, dictionaries, or custom classes that contain other objects.
🔹 Shallow Copy
A shallow copy creates a new object, but does not recursively copy the objects contained within. Instead, it copies references to the original nested objects.
Example:
```python
import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)

shallow[0][0] = 99
print(original)  # Output: [[99, 2], [3, 4]]
```
✅ The outer list is copied, but the inner lists are shared between original and shallow.
🔹 Deep Copy
A deep copy creates a new object and recursively copies all nested objects, so the new object is completely independent of the original.
Example:
```python
import copy

original = [[1, 2], [3, 4]]
deep = copy.deepcopy(original)

deep[0][0] = 99
print(original)  # Output: [[1, 2], [3, 4]]
```
✅ Both the outer and inner lists are copied, so changes to deep don’t affect original.
🔸 Summary Table
| Feature | Shallow Copy | Deep Copy |
|---|---|---|
| Copies outer object | ✅ Yes | ✅ Yes |
| Copies nested objects | ❌ No (references only) | ✅ Yes (recursively) |
| Independence | ❌ Partial | ✅ Full |
| Use case | When nested objects should be shared | When full isolation is needed |
4. How is exception handling done in Python?
In Python, exception handling is done using the try...except block, which allows you to gracefully handle errors that occur during program execution, rather than letting the program crash.
🔹 Basic Syntax
```python
try:
    # Code that might raise an exception
    risky_operation()
except SomeException:
    # Code that runs if an exception occurs
    handle_error()
```
🔹 Example
```python
try:
    x = int(input("Enter a number: "))
    result = 10 / x
except ValueError:
    print("That's not a valid number!")
except ZeroDivisionError:
    print("You can't divide by zero!")
else:
    print("Result is:", result)
finally:
    print("This always runs.")
```
🔸 Explanation of Keywords
| Keyword | Purpose |
|---|---|
| try | Wraps code that might raise an exception. |
| except | Catches and handles specific exceptions. |
| else | Runs if no exception occurs in the try block. |
| finally | Always runs, whether an exception occurred or not (used for cleanup). |
🔹 Catching Multiple Exceptions
```python
try:
    ...  # some code that may raise
except (TypeError, ValueError) as e:
    print("Caught an error:", e)
```
🔹 Raising Exceptions Manually
```python
def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
```
🔹 Custom Exceptions
You can define your own exceptions by subclassing Exception:
```python
class MyCustomError(Exception):
    pass

try:
    raise MyCustomError("Something went wrong!")
except MyCustomError as e:
    print(e)
```
5. What are Python’s iterators and generators? Give an example of a generator function.
In Python, iterators and generators are tools for working with sequences of data, especially when you want to process items one at a time and on demand, rather than all at once.
🔹 Iterators
An iterator is any object that implements two methods:
__iter__() – returns the iterator object itself.
__next__() – returns the next item or raises StopIteration when done.
Example:
```python
nums = [1, 2, 3]
it = iter(nums)

print(next(it))  # 1
print(next(it))  # 2
print(next(it))  # 3
# next(it) would raise StopIteration
```
🔹 Generators
A generator is a simpler way to create iterators using a function with yield instead of return. Each time yield is called, the function’s state is saved, and it resumes from there on the next call.
✅ Generator Function Example:
```python
def countdown(n):
    while n > 0:
        yield n
        n -= 1

for num in countdown(5):
    print(num)
```
Output:
5
4
3
2
1
🔸 Key Differences
| Feature | Iterator | Generator |
|---|---|---|
| Definition | Class with __iter__ and __next__ | Function with yield |
| Memory usage | Can be high (if not optimized) | Very memory-efficient |
| Syntax | More verbose | Concise and readable |
| Use case | Custom iteration logic | Lazy evaluation of sequences |
6. What is monkey patching in Python?
Monkey patching in Python refers to the dynamic modification of a class or module at runtime. It allows you to change or extend the behavior of libraries, classes, or objects without altering the original source code.
🔹 When Is Monkey Patching Used?
To fix bugs in third-party libraries.
To add logging or debugging features.
To override behavior for testing purposes.
To inject custom behavior into existing code.
🔹 Example: Monkey Patching a Class Method
```python
class Dog:
    def bark(self):
        print("Woof!")

# Original behavior
dog = Dog()
dog.bark()  # Output: Woof!

# Monkey patching the bark method
def new_bark(self):
    print("Meow?")

Dog.bark = new_bark

# Modified behavior
dog.bark()  # Output: Meow?
```
Here, we replaced the bark method of the Dog class with a new function new_bark at runtime.
🔹 Monkey Patching a Module Function
```python
import time

# Original sleep function
print("Sleeping...")
time.sleep(2)
print("Awake!")

# Monkey patching time.sleep to do nothing
time.sleep = lambda x: None

print("Sleeping...")
time.sleep(2)  # Does nothing
print("Awake immediately!")
```
🔸 Pros and Cons
| Pros | Cons |
|---|---|
| Quick fixes without changing source | Can lead to hard-to-debug code |
| Useful for testing/mocking | Breaks encapsulation |
| Adds flexibility | May cause unexpected behavior |
⚠️ Caution
Monkey patching should be used sparingly and carefully, especially in production code. It can make code harder to understand, maintain, and debug.
7. Difference between *args and **kwargs.
In Python, *args and **kwargs are used in function definitions to allow variable numbers of arguments to be passed.
🔹 *args (Non-keyword Arguments)
Stands for "arguments".
Allows a function to accept any number of positional arguments.
Inside the function, args is a tuple.
Example:
```python
def add(*args):
    return sum(args)

print(add(1, 2, 3))  # Output: 6
```
🔹 **kwargs (Keyword Arguments)
Stands for "keyword arguments".
Allows a function to accept any number of named arguments.
Inside the function, kwargs is a dictionary.
Example:
```python
def greet(**kwargs):
    for key, value in kwargs.items():
        print(f"{key} = {value}")

greet(name="Alice", age=30)
# Output:
# name = Alice
# age = 30
```
🔸 Combined Usage
You can use both in the same function, but *args must come before **kwargs.
```python
def demo(a, *args, **kwargs):
    print("a =", a)
    print("args =", args)
    print("kwargs =", kwargs)

demo(1, 2, 3, x=10, y=20)
# Output:
# a = 1
# args = (2, 3)
# kwargs = {'x': 10, 'y': 20}
```
🔸 Summary Table
| Feature | *args | **kwargs |
|---|---|---|
| Type | Tuple | Dictionary |
| Use case | Variable positional arguments | Variable keyword arguments |
| Syntax | *args | **kwargs |
8. Explain how Python’s garbage collector works.
Python’s garbage collector is a built-in mechanism that automatically manages memory by reclaiming unused objects, helping prevent memory leaks and freeing developers from manual memory management.
🔹 How It Works
Python primarily uses two techniques for garbage collection:
1. Reference Counting
Every object in Python has a reference count: the number of references pointing to it.
When the reference count drops to zero, the object is immediately destroyed.
```python
a = [1, 2, 3]  # reference count = 1
b = a          # reference count = 2
del a          # reference count = 1
del b          # reference count = 0 → object is deleted
```
2. Cycle Detection (Generational GC)
Reference counting alone can’t handle circular references (e.g., two objects referencing each other).
Python’s gc module uses a generational garbage collector to detect and clean up cycles.
🔹 Generational Garbage Collection
Python divides objects into three generations:
Generation 0: Newly created objects.
Generation 1: Objects that survived one garbage collection.
Generation 2: Long-lived objects.
The idea is that most objects die young, so Python collects Generation 0 frequently and Generation 2 less often.
🔹 Using the gc Module
You can interact with the garbage collector using the gc module:
```python
import gc

gc.collect()        # Manually trigger garbage collection
gc.get_threshold()  # View collection thresholds
gc.get_objects()    # List all tracked objects
```
🔸 Summary
| Feature | Description |
|---|---|
| Reference Counting | Immediate cleanup when count hits zero |
| Cycle Detection | Handles circular references |
| Generational GC | Optimizes performance by grouping objects |
| Manual Control | Via gc module for advanced use cases |
9. What are metaclasses in Python?
In Python, a metaclass is a class of a class—it defines how classes behave. Just as classes define how objects behave, metaclasses define how classes themselves are constructed.
🔹 What Is a Metaclass?
When you create a class in Python, it is itself an instance of a metaclass.
By default, all classes are instances of the built-in type metaclass.
You can define a custom metaclass to control class creation, modify attributes, enforce coding standards, or inject behavior.
🔹 Basic Example
```python
# Custom metaclass
class MyMeta(type):
    def __new__(cls, name, bases, dct):
        print(f"Creating class {name}")
        return super().__new__(cls, name, bases, dct)

# Using the metaclass
class MyClass(metaclass=MyMeta):
    pass
```
Output:
Creating class MyClass
Here, MyMeta intercepts the creation of MyClass.
🔹 Why Use Metaclasses?
Validation: Enforce rules on class definitions.
Auto-modification: Add or change methods/attributes dynamically.
Singleton pattern: Control instantiation to ensure only one instance.
ORMs and frameworks: Django uses metaclasses to define models.
🔹 Anatomy of a Metaclass
A metaclass typically overrides:
__new__(cls, name, bases, dct): Controls class creation.
__init__(cls, name, bases, dct): Optional, for post-processing.
🔸 Summary
| Concept | Description |
|---|---|
| Class | Blueprint for objects |
| Metaclass | Blueprint for classes |
| Default | type is the default metaclass |
| Use cases | Validation, modification, design patterns |
10. How do you achieve multithreading vs multiprocessing in Python?
In Python, multithreading and multiprocessing are two ways to achieve concurrency, but they work very differently under the hood and are suited for different types of tasks.
🔹 Multithreading
Uses the threading module.
Runs multiple threads within the same process.
Threads share the same memory space.
Limited by the Global Interpreter Lock (GIL) in CPython, so only one thread executes Python bytecode at a time.
Best for I/O-bound tasks (e.g., file I/O, network requests).
✅ Example:
```python
import threading

def task():
    print("Running in thread")

t1 = threading.Thread(target=task)
t1.start()
t1.join()
```
🔹 Multiprocessing
Uses the multiprocessing module.
Runs multiple processes, each with its own Python interpreter and memory space.
Not affected by the GIL → allows true parallelism.
Best for CPU-bound tasks (e.g., data processing, computation-heavy tasks).
✅ Example:
```python
import multiprocessing

def task():
    print("Running in process")

if __name__ == "__main__":  # required on platforms that spawn new interpreters
    p1 = multiprocessing.Process(target=task)
    p1.start()
    p1.join()
```
🔸 Key Differences
| Feature | Multithreading | Multiprocessing |
|---|---|---|
| Concurrency type | Threads (shared memory) | Processes (separate memory) |
| GIL impact | Yes (limits CPU-bound tasks) | No (true parallelism) |
| Best for | I/O-bound tasks | CPU-bound tasks |
| Memory usage | Lower | Higher |
| Communication | Shared memory, locks | Queues, pipes |
🔹 When to Use What?
Use multithreading when your program spends a lot of time waiting (e.g., downloading files, reading from disk).
Use multiprocessing when your program needs to do heavy computation (e.g., image processing, data analysis).
---
🔹 Advanced Python Interview Questions
1. Explain how Python’s memory is managed internally (heap, reference counting, garbage collection).
Python’s memory management is a sophisticated system that combines heap allocation, reference counting, and garbage collection to efficiently manage memory usage during program execution.
🔹 1. Heap Memory Allocation
All Python objects and data structures are stored in a private heap.
This heap is managed by the Python memory manager, which handles allocation and deallocation of memory.
The memory manager works with the object allocator and garbage collector to optimize performance.
🔹 2. Reference Counting
Every Python object has an internal counter that tracks how many references point to it.
When the reference count drops to zero, the object is immediately deallocated.
✅ Example:
```python
a = [1, 2, 3]  # ref count = 1
b = a          # ref count = 2
del a          # ref count = 1
del b          # ref count = 0 → object is deleted
```
This is the primary memory management technique in CPython (the standard Python implementation).
🔹 3. Garbage Collection (GC)
Reference counting alone can’t handle circular references (e.g., two objects referencing each other).
Python uses a cyclic garbage collector to detect and clean up these cycles.
The gc module provides access to this system.
✅ Example of Circular Reference:
```python
import gc

class A:
    def __init__(self):
        self.b = None

class B:
    def __init__(self):
        self.a = None

a = A()
b = B()
a.b = b
b.a = a

del a
del b
gc.collect()  # Cleans up the cycle
```
🔹 4. Generational Garbage Collection
Python’s GC divides objects into three generations:
Generation 0: Newly created objects.
Generation 1: Survived one GC cycle.
Generation 2: Long-lived objects.
The idea is that most objects die young, so Python collects Generation 0 frequently and Generation 2 less often.
🔸 Summary Table
| Component | Role |
|---|---|
| Heap Memory | Stores all Python objects |
| Reference Counting | Immediate cleanup when no references remain |
| Garbage Collector | Detects and removes circular references |
| Generational GC | Optimizes performance by grouping objects |
2. What are descriptors in Python and how do they work?
In Python, descriptors are a powerful feature that allow you to customize how attribute access works in classes. They are the foundation of many built-in features like @property, static methods, class methods, and even ORM field definitions in frameworks like Django.
🔹 What Is a Descriptor?
A descriptor is any object that defines one or more of the following methods:
__get__(self, instance, owner) – called when the attribute is accessed.
__set__(self, instance, value) – called when the attribute is assigned a value.
__delete__(self, instance) – called when the attribute is deleted.
If an object defines any of these methods, it becomes a descriptor.
🔹 How Descriptors Work
Descriptors are used when they are assigned as class attributes. Python’s attribute lookup mechanism checks if the attribute is a descriptor and, if so, delegates the operation to the descriptor’s method.
✅ Example: Custom Descriptor
```python
class UpperCaseDescriptor:
    def __get__(self, instance, owner):
        return instance._name.upper()

    def __set__(self, instance, value):
        instance._name = value

class Person:
    name = UpperCaseDescriptor()

    def __init__(self, name):
        self.name = name

p = Person("sireesh")
print(p.name)  # Output: SIREESH
```
Here, name is managed by UpperCaseDescriptor, which transforms the value to uppercase when accessed.
🔸 Types of Descriptors
| Type | Methods Implemented | Behavior |
|---|---|---|
| Data Descriptor | __get__ and __set__ | Takes precedence over instance attributes |
| Non-data Descriptor | Only __get__ | Used if no instance attribute shadows it |
🔹 Built-in Uses of Descriptors
@property → creates a managed attribute.
staticmethod, classmethod → use descriptors internally.
Django ORM fields → use descriptors to manage database access.
🔸 Summary
| Feature | Descriptor |
|---|---|
| Purpose | Customize attribute access |
| Key methods | __get__, __set__, __delete__ |
| Use cases | Properties, validation, computed fields |
| Built-in support | @property, staticmethod, classmethod |
3. How would you implement a singleton in Python?
In Python, a singleton is a design pattern that ensures a class has only one instance throughout the lifetime of a program. There are several ways to implement a singleton, each with its own trade-offs.
✅ Method 1: Using a Class Variable
```python
class Singleton:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance

# Usage
s1 = Singleton()
s2 = Singleton()
print(s1 is s2)  # True
```
__new__ ensures only one instance is created.
All subsequent calls return the same object.
✅ Method 2: Using a Decorator
Python
def singleton(cls):
    instances = {}

    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return get_instance

@singleton
class MyClass:
    pass

a = MyClass()
b = MyClass()
print(a is b)  # True
This wraps the class in a function that controls instantiation.
✅ Method 3: Using a Metaclass
Python
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(SingletonMeta, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class MyClass(metaclass=SingletonMeta):
    pass

x = MyClass()
y = MyClass()
print(x is y)  # True
This is a more advanced and flexible approach.
Useful when you want to apply singleton behavior across multiple classes.
🔸 Summary
Method | Pros | Cons |
Class variable | Simple and readable | Less flexible |
Decorator | Clean and reusable | May obscure class definition |
Metaclass | Powerful and scalable | More complex |
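None of these variants is thread-safe on its own: two threads can both pass the `is None` check before either assigns the instance. A common remedy is double-checked locking; a minimal sketch (the class name is illustrative):

```python
import threading

class ThreadSafeSingleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:          # fast path: skip locking once created
            with cls._lock:
                if cls._instance is None:  # re-check: another thread may have won the race
                    cls._instance = super().__new__(cls)
        return cls._instance

a = ThreadSafeSingleton()
b = ThreadSafeSingleton()
print(a is b)  # True
```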
4. Difference between @property and a normal method.
In Python, @property and normal methods are both ways to define behavior in a class, but they serve different purposes and are used differently.
🔹 @property: Managed Attribute Access
Converts a method into a managed attribute (read-only unless a setter is also defined).
Allows you to access a method like an attribute, without parentheses.
Commonly used for computed attributes or to encapsulate internal data.
✅ Example:
Python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def area(self):
        return 3.14 * self._radius ** 2

c = Circle(5)
print(c.area)  # Accessed like an attribute, not a method
🔹 Normal Method
Requires explicit invocation with parentheses.
Used for actions or behaviors, not for representing data.
✅ Example:
Python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    def area(self):
        return 3.14 * self._radius ** 2

c = Circle(5)
print(c.area())  # Must use parentheses
🔸 Key Differences
Feature | @property | Normal Method |
Access syntax | obj.attribute | obj.method() |
Purpose | Represent computed attributes | Perform actions or calculations |
Use case | Encapsulation, read-only access | General behavior |
Can be set? | Only with @property.setter | Not applicable |
🔹 Bonus: @property with Setter
You can also define a setter to make the property writable:
Python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):
        return self._radius

    @radius.setter
    def radius(self, value):
        if value < 0:
            raise ValueError("Radius cannot be negative")
        self._radius = value
5. Explain async/await in Python. How does it differ from threads?
In Python, async/await is used for asynchronous programming, allowing you to write code that performs non-blocking I/O operations efficiently. It’s different from multithreading, which uses threads to achieve concurrency.
🔹 What Is async/await?
Introduced in Python 3.5+
Built on coroutines and the asyncio library.
Allows you to pause and resume functions without blocking the main thread.
Ideal for I/O-bound tasks like network requests, file operations, or database queries.
✅ Example: Asynchronous Function
Python
import asyncio
async def fetch_data():
print("Start fetching...")
await asyncio.sleep(2)
print("Done fetching!")
return "Data"
async def main():
result = await fetch_data()
print(result)
asyncio.run(main())
async def defines a coroutine.
await pauses execution until the awaited task completes.
asyncio.run() starts the event loop.
🔸 How It Differs from Threads
Feature | async/await (AsyncIO) | Threads (threading) |
Concurrency model | Cooperative multitasking | Preemptive multitasking |
Execution | Single-threaded event loop | Multiple threads |
Best for | I/O-bound tasks | I/O-bound (some CPU-bound) |
Memory usage | Low | Higher |
GIL impact | Not affected (no true parallelism) | Affected (only one thread runs Python code at a time) |
Complexity | Requires async-compatible libraries | Easier to use for simple tasks |
🔹 When to Use What?
Use async/await when:
You’re dealing with many I/O operations (e.g., web scraping, API calls).
You want lightweight concurrency without threads.
Use threads when:
You need to run blocking code concurrently.
You’re working with libraries that don’t support async.
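The contrast can be seen side by side. In this sketch (the task functions are illustrative), five simulated 0.1-second I/O waits finish in roughly 0.1 seconds under both models: asyncio interleaves the coroutines on a single event loop, while the thread pool runs the blocking calls in parallel worker threads.

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

async def async_task(n):
    await asyncio.sleep(0.1)     # simulated non-blocking I/O
    return n

async def run_async():
    # gather() schedules all coroutines concurrently on one event loop
    return await asyncio.gather(*(async_task(i) for i in range(5)))

def blocking_task(n):
    time.sleep(0.1)              # simulated blocking I/O
    return n

start = time.perf_counter()
print("async:  ", asyncio.run(run_async()), f"{time.perf_counter() - start:.2f}s")

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(blocking_task, range(5)))
print("threads:", results, f"{time.perf_counter() - start:.2f}s")
```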
6. How do you optimize performance in Python?
Optimizing performance in Python involves a combination of code-level improvements, efficient data structures, and system-level strategies. Here's a breakdown of key techniques:
🔹 1. Use Built-in Functions and Libraries
Python’s built-in functions (like sum(), map(), filter(), sorted()) are implemented in C and are highly optimized.
Use libraries like NumPy, Pandas, and collections for efficient data handling.
🔹 2. Choose the Right Data Structures
Use set for fast membership tests.
Use deque from collections for fast appends/pops from both ends.
Use defaultdict or Counter for counting and grouping.
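A quick illustration of all three (the sample values are arbitrary):

```python
from collections import Counter, deque

# set: O(1) average-case membership test (a list would scan every element)
allowed = {"red", "green", "blue"}
print("green" in allowed)      # True

# deque: O(1) appends/pops at both ends (list.pop(0) is O(n))
queue = deque([1, 2, 3])
queue.appendleft(0)
queue.pop()
print(queue)                   # deque([0, 1, 2])

# Counter: counting and ranking in one line
print(Counter("mississippi").most_common(2))
```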
🔹 3. Avoid Unnecessary Loops and Computations
Use list comprehensions instead of manual loops:
Python
squares = [x*x for x in range(10)]
Cache results using memoization or functools.lru_cache.
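For example, functools.lru_cache turns the exponential naive Fibonacci into a linear-time function by remembering results it has already computed:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # cache the result for every distinct argument
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed once per n instead of ~2.7 million recursive calls
```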
🔹 4. Use Generators for Large Data
Generators (yield) are memory-efficient for large datasets.
Python
def read_large_file():
    with open('big.txt') as f:
        for line in f:
            yield line
🔹 5. Profile and Benchmark Your Code
Use timeit for benchmarking:
Python
import timeit
print(timeit.timeit('sum(range(100))', number=1000))
Use cProfile or line_profiler to find bottlenecks.
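A minimal cProfile session, capturing the report into a string so it can be inspected programmatically (the profiled function is illustrative):

```python
import cProfile
import io
import pstats

def slow():
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
slow()
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())  # top 5 entries by cumulative time
```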
🔹 6. Parallelism and Concurrency
Use multiprocessing for CPU-bound tasks.
Use asyncio or threading for I/O-bound tasks.
🔹 7. Avoid Global Variables
Global variables slow down access due to scope resolution.
Prefer local variables inside functions.
🔹 8. Use Efficient String Operations
Avoid repeated string concatenation with + in loops.
Use str.join() for combining strings:
Python
''.join(list_of_strings)
🔹 9. Use Just-in-Time Compilation (JIT)
Use PyPy instead of CPython for faster execution in many cases.
PyPy includes a JIT compiler that can significantly speed up Python code.
🔹 10. Avoid Memory Leaks
Use weak references (weakref) for large objects.
Monitor memory usage with tools like tracemalloc.
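A minimal tracemalloc check (the allocation here is only for demonstration):

```python
import tracemalloc

tracemalloc.start()
data = [str(i) * 10 for i in range(10_000)]      # allocate some memory
current, peak = tracemalloc.get_traced_memory()  # bytes currently held / peak since start
tracemalloc.stop()
print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")
```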
🔸 Summary Table
Technique | Benefit |
Built-in functions | Fast and optimized |
Right data structures | Efficient memory and speed |
Generators | Low memory usage |
Profiling tools | Identify bottlenecks |
Multiprocessing/AsyncIO | Concurrency and parallelism |
PyPy | Faster execution |
7. What is the difference between deepcopy, pickle, and copyreg?
In Python, deepcopy, pickle, and copyreg are all related to object copying and serialization, but they serve different purposes and operate in distinct ways.
🔹 1. deepcopy (from copy module)
Creates a new object that is a deep copy of the original.
Recursively copies all nested objects.
Used when you want a completely independent clone of a complex object.
✅ Example:
Python
import copy
original = {'a': [1, 2], 'b': [3, 4]}
cloned = copy.deepcopy(original)
cloned['a'][0] = 99
print(original['a'][0]) # Output: 1 (unchanged)
🔹 2. pickle (Serialization)
Converts Python objects into a byte stream (serialization) and back (deserialization).
Used for saving objects to disk, sending over a network, or caching.
Doesn’t copy objects in memory—it stores and restores them.
✅ Example:
Python
import pickle
data = {'x': 42, 'y': [1, 2, 3]}
with open('data.pkl', 'wb') as f:
    pickle.dump(data, f)

with open('data.pkl', 'rb') as f:
    loaded = pickle.load(f)
print(loaded) # Output: {'x': 42, 'y': [1, 2, 3]}
🔹 3. copyreg (Custom Pickling)
Used to register custom pickling behavior for complex or non-standard objects.
Helps pickle understand how to serialize/deserialize objects that don’t work out-of-the-box.
✅ Example:
Python
import copyreg
import pickle
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def pickle_point(point):
    return Point, (point.x, point.y)

copyreg.pickle(Point, pickle_point)

p = Point(1, 2)
data = pickle.dumps(p)
restored = pickle.loads(data)
print(restored.x, restored.y)  # Output: 1 2
🔸 Summary Table
Feature | deepcopy | pickle | copyreg |
Purpose | Clone objects in memory | Serialize/deserialize objects | Customize pickling for complex types |
Module | copy | pickle | copyreg |
Output | New object | Byte stream | Pickle-compatible instructions |
Use case | Avoid shared references | Save/load objects | Extend pickle for custom classes |
8. Explain the difference between Python’s __new__ and __init__.
In Python, __new__ and __init__ are two special methods that play distinct roles in object creation and initialization.
🔹 __new__: Object Creation
__new__ is responsible for creating a new instance of a class.
It is called before __init__.
It returns the new object (usually an instance of the class).
It’s a static method and is rarely overridden unless you're working with immutable types (like tuple, str, int) or implementing design patterns like Singleton.
✅ Example:
Python
class MyClass:
    def __new__(cls):
        print("Creating instance")
        return super(MyClass, cls).__new__(cls)

    def __init__(self):
        print("Initializing instance")

obj = MyClass()
Output:
Creating instance
Initializing instance
🔹 __init__: Object Initialization
__init__ is called after the object is created.
It initializes the object’s attributes.
It does not return anything (returns None).
✅ Example:
Python
class MyClass:
    def __init__(self, value):
        self.value = value
        print("Initialized with value:", value)

obj = MyClass(10)
🔸 Key Differences
Feature | __new__ | __init__ |
Purpose | Creates the object | Initializes the object |
Called when | Before __init__ | After __new__ |
Returns | A new instance | Nothing (None) |
Common use case | Immutable types, Singleton pattern | Setting up instance attributes |
🔹 Real-World Use Case: Singleton with __new__
Python
class Singleton:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        # Note: __init__ still runs on every Singleton() call,
        # even though __new__ returns the same instance each time
        print("Initializing Singleton")

a = Singleton()
b = Singleton()
print(a is b)  # True
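Because immutable objects cannot be modified in __init__, customizing them has to happen in __new__, before the object exists. A minimal sketch with a str subclass (the class name is illustrative):

```python
class UpperStr(str):
    def __new__(cls, value):
        # str is immutable: its value must be fixed at creation time
        return super().__new__(cls, value.upper())

s = UpperStr("hello")
print(s)                   # HELLO
print(isinstance(s, str))  # True
```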
9. How do you secure Python code from reverse engineering?
Securing Python code from reverse engineering is important when you're distributing applications or libraries and want to protect intellectual property, sensitive logic, or proprietary algorithms. Since Python is an interpreted language, its source code is often easy to access and understand—even when compiled into .pyc or packaged into executables.
Here are several strategies to help obfuscate and protect Python code:
🔹 1. Code Obfuscation
Use tools to rename variables, functions, and classes to meaningless names.
Makes the code harder to read but doesn’t prevent decompilation.
Tools:
pyarmor
python-obfuscator
Opy (Obfuscator for Python)
🔹 2. Compile to Bytecode
Python automatically compiles .py files to .pyc bytecode.
You can distribute only .pyc files, but they can still be decompiled.
Tools:
compileall module
Cython (compiles Python to C)
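As a rough sketch, the compileall module can also be driven from Python rather than the command line; here a throwaway module is compiled and the resulting .pyc is located (the file names are illustrative):

```python
import compileall
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "hello.py"
    src.write_text("print('hi')\n")
    ok = compileall.compile_dir(tmp, quiet=1)  # true value if everything compiled
    pycs = list((pathlib.Path(tmp) / "__pycache__").glob("*.pyc"))
    print(ok, [p.name for p in pycs])
```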
🔹 3. Use Cython for Compilation
Converts Python code to C and compiles it into a binary (.so or .pyd).
Much harder to reverse engineer than .pyc.
Example:
Shell
cython --embed -o mymodule.c mymodule.py
gcc -o mymodule mymodule.c $(python3-config --cflags --ldflags)
🔹 4. Package as Executable
Use tools like PyInstaller, cx_Freeze, or Nuitka to bundle your app into a standalone executable.
You can include compiled bytecode and native binaries.
Note:
This adds a layer of protection but doesn’t make it completely secure.
🔹 5. Encrypt Code with Runtime Decryption
Encrypt your Python files and decrypt them at runtime.
Requires a secure key management strategy.
Tool:
pyarmor supports runtime encryption and license control.
🔹 6. Use Licensing and Activation
Combine obfuscation with license checks to prevent unauthorized use.
Tools like pyarmor and nuitka support license enforcement.
🔹 7. Limit Exposure
Keep sensitive logic on the server (e.g., via APIs) instead of distributing it.
This is the most secure approach for critical business logic.
🔸 Summary Table
Technique | Protection Level | Notes |
Obfuscation | Low | Easy to bypass |
Bytecode distribution | Low | Can be decompiled |
Cython compilation | Medium-High | Converts to native code |
Executable packaging | Medium | Adds complexity |
Runtime encryption | High | Needs secure key handling |
Server-side logic | Very High | Best for critical code |
10. If Python is interpreted, why is .pyc file generated?
Even though Python is an interpreted language, it still generates .pyc files to optimize performance during execution.
🔹 Why .pyc Files Are Generated
When you run a Python script:
The Python interpreter compiles the .py source code into bytecode.
This bytecode is a lower-level, platform-independent representation of your code.
The bytecode is then executed by the Python Virtual Machine (PVM).
To avoid compiling the same .py file every time, Python stores the compiled bytecode in a .pyc file inside the __pycache__ directory.
🔸 Benefits of .pyc Files
Feature | Purpose |
Faster startup | Skips recompilation of .py files |
Caching | Stores compiled bytecode for reuse |
Optimization | Improves performance for large projects |
🔹 Key Points
.pyc files are not machine code; they still require the Python interpreter to run.
They are not secure—they can be decompiled back to readable code.
Python checks the timestamp of the .py file to decide whether to regenerate the .pyc.
🔹 Related Tools
compileall module: Pre-compiles .py files to .pyc.
dis module: Disassembles bytecode for inspection.
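For instance, dis can disassemble a function into its bytecode instructions (writing to a buffer here so the output can be inspected; exact instruction names vary by Python version):

```python
import dis
import io

def add(a, b):
    return a + b

buf = io.StringIO()
dis.dis(add, file=buf)  # e.g. LOAD_FAST / RETURN_VALUE instructions
print(buf.getvalue())
```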