The mode 'r+' opens a file for both reading and writing without deleting existing data. However, the file must already exist; otherwise, it raises a FileNotFoundError.
'w+' also allows reading and writing but truncates the file (deletes previous content).
'a+' allows reading and appending, but the file pointer starts at the end.
'x+' creates a new file for reading and writing and fails if the file already exists.
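To see the difference in practice, here is a minimal sketch; the file name modes_demo.txt is made up for this illustration:

with open("modes_demo.txt", "w") as f:       # start with a known file
    f.write("original")

with open("modes_demo.txt", "r+") as f:      # 'r+': read and write, existing data kept
    print(f.read())                          # -> original

with open("modes_demo.txt", "a+") as f:      # 'a+': pointer starts at the end
    f.write(" plus appended text")

with open("modes_demo.txt", "w+") as f:      # 'w+': truncates on open
    print(repr(f.read()))                    # -> '' (previous content is gone)

try:
    open("modes_demo.txt", "x+")             # 'x+': fails because the file exists
except FileExistsError:
    print("x+ refuses to overwrite an existing file")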
What will data.txt contain after executing the following code?
with open("data.txt", "w") as f:
f.write("Hello\n")
f.write("Solviyo")
# Code that writes to the file
with open("data.txt", "w") as f:
    f.write("Hello\n")
    f.write("Solviyo")

# After running, the file data.txt contains two lines:
# Line 1: Hello
# Line 2: Solviyo
Writing text to a file with newlines
- The first write() call writes "Hello\n" (which adds a newline), and the second writes "Solviyo" on the next line, so the file contains two lines: Hello and Solviyo.
- The file does not contain a literal \n; in reality \n is a newline character, not the two characters backslash + n.
- A single combined line would only result if no newline were written (e.g., f.write("Hello") then f.write("Solviyo")), but here a newline is present.
Step-by-step reasoning:
open("data.txt", "w")
opens (or creates) the file in write mode and truncates existing content.with
ensures the file is closed automatically when the block ends.f.write("Hello\n")
writes the string "Hello" followed by a newline character, so the file now contains the first line Hello
and the file pointer is at the start of the second line.f.write("Solviyo")
writes "Solviyo" starting at the second line; no extra newline is added after this call.Hello
Solviyo
Key takeaways:
- Use the newline character (\n) to control line breaks when writing text files.
- The with open(...) context manager automatically closes the file; this is the preferred practice.
- "w" truncates existing content; use "a" to append instead.
You have a file info.txt containing the following lines:
Python
Solviyo
Exercises
with open("info.txt", "r") as f:
content = f.readline()
print(content.strip())
# Code to read the first line from a file
with open("info.txt", "r") as f:
    content = f.readline()
    print(content.strip())

# Output:
# Python
Reading the first line from a text file
- The readline() method reads only the first line from the file, which is "Python". The strip() method removes the newline character at the end.
- readline() reads only one line; to read all lines, read() or readlines() should be used.
- The readlines() method would return a list, but readline() does not.
Step-by-step reasoning:
1. info.txt contains three lines.
2. open("info.txt", "r") opens the file in read mode.
3. readline() reads the first line only: "Python\n".
4. strip() removes the trailing newline character \n.
5. print() outputs the clean text Python.
Key takeaways:
- readline() reads one line at a time, making it memory-efficient for large files.
- strip() is useful for removing newline and extra whitespace characters.
- Use with open(...) to automatically close the file after reading.
Which statement about using with open() for file operations is correct?
Using with open() for file handling
- When you use with open(), Python automatically closes the file; you do not need to call f.close() manually.
- with open() works for all modes – reading, writing, appending, etc.; it is not limited to 'a'.
- with open() ensures that the file is properly closed after the block ends, even if exceptions occur.
Key takeaways:
- Prefer with open(...) over manually opening and closing files.
- It works with any mode: 'r', 'w', 'a', etc.
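As a small illustration of why the context manager is preferred, here is a sketch contrasting the two styles; the file name notes.txt is hypothetical:

# Manual handling: close() must be called yourself, ideally in a finally block.
f = open("notes.txt", "w")
try:
    f.write("manual handling\n")
finally:
    f.close()

# Context manager: the file is closed automatically, even if an exception occurs.
with open("notes.txt", "a") as f:
    f.write("context manager\n")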
You want to add new text to log.txt without overwriting the existing content. Which mode should you use?
Appending text to an existing file
- 'r' opens the file in read-only mode. You cannot write or append.
- 'w' opens the file in write mode and truncates (erases) the existing content.
- 'r+' allows reading and writing but starts at the beginning; existing content may be overwritten if not handled carefully.
- 'a' opens the file for appending. New data is written **at the end** of the file without affecting existing content.
Step-by-step reasoning:
with open("log.txt", "a") as f
.f.write("New log entry\n")
.Key takeaways:
- Use 'a' mode whenever you need to add data without losing previous content.
- with open(...) ensures safe and clean file closure.
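A minimal sketch of this pattern, reusing the log.txt name from the question:

# Append mode: each run adds a line to the end of log.txt instead of replacing it.
with open("log.txt", "a") as f:
    f.write("New log entry\n")

# Reading it back shows that any earlier lines are still present.
with open("log.txt", "r") as f:
    print(f.read())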
Which snippet reads example.txt and prints each line in uppercase, ensuring the file is safely closed afterward?
Reading a file line by line with safe file handling
- Opening and closing the file manually risks leaving it open if an exception occurs before f.close() is called.
- f.read() reads the entire content as a single string; upper() converts everything at once, but iterating line by line is preferred for large files.
- The correct snippet uses with open(...) to ensure the file is automatically closed and iterates over each line, printing it in uppercase individually.
- readlines() returns a list of lines; calling upper() directly on a list raises an error.
Step-by-step reasoning:
with open("example.txt", "r") as f
safely opens the file for reading.f
retrieves one line at a time, efficient for memory.line.upper()
converts the line to uppercase before printing.Key takeaways:
- Always use with open(...) to ensure files are properly closed.
- Avoid read() for large files.
- upper() works on strings, not on lists of lines.
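For reference, a minimal sketch of the approach described above, assuming example.txt exists:

# Iterate over the file object so only one line is in memory at a time;
# the with block closes the file even if printing fails.
with open("example.txt", "r") as f:
    for line in f:
        print(line.upper(), end="")   # end="" because most lines keep their own newline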
You want to read all lines from data.txt and remove the trailing newline characters from each line. Which snippet achieves this correctly?
Reading all lines from a file and stripping newlines
- f.readlines() reads all lines into a list. Using a list comprehension with strip() removes the trailing newline character from each line.
- f.read() returns a single string, so iterating over it yields individual characters, not lines.
- Although iterating over f works, the file is opened without the with context manager; if an exception occurs, the file may not be properly closed.
- readlines() returns a list of strings, and lists do not have an upper() method; this would raise an error.
Step-by-step reasoning:
with open("data.txt", "r") as f
safely opens the file for reading.f.readlines()
returns a list of all lines, each ending with a newline character.[line.strip() for line in f.readlines()]
removes trailing newlines.lines
shows the clean list of strings.Key takeaways:
- Use strip() to clean up newline characters from file lines.
- Always use with open(...) to handle files safely.
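A minimal sketch of the winning snippet, assuming data.txt exists:

with open("data.txt", "r") as f:
    lines = [line.strip() for line in f.readlines()]

print(lines)   # e.g. ['first line', 'second line'] -- illustrative values only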
Manipulating the file pointer using seek()
- After f.read(), the file pointer is at the end. Using f.seek(0) moves the pointer back to the beginning, allowing f.read() to read the content again.
- f.seek(10) moves the pointer to the 10th byte, not the beginning. Reading from there will not include the first 10 characters.
- f.tell() just returns the current pointer position; it does not move it. The second f.read() would return an empty string since the pointer is already at the end.
- f.read(0) reads zero characters, so nothing is printed.
Step-by-step reasoning:
1. The first f.read() moves the file pointer to the end of the file.
2. f.seek(0) resets the pointer to the start of the file.
3. The second f.read() reads the entire content again from the beginning.
Key takeaways:
- seek(offset) moves the file pointer to the specified byte position.
- Use tell() to check the current position of the file pointer.
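A short sketch of this behaviour; the file name poem.txt is a placeholder for any existing text file:

with open("poem.txt", "r") as f:
    first = f.read()          # reads everything; the pointer is now at the end
    print(f.tell())           # current position (an offset at the end of the file)
    f.seek(0)                 # move the pointer back to the beginning
    second = f.read()         # reads the same content again
    print(first == second)    # True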
You have a very large file bigdata.txt. Which of the following is the most memory-efficient way to read and process it line by line?
Reading large files efficiently line by line
- readlines() reads the entire file into memory at once, which can be memory-intensive for very large files.
- f.read() loads the whole file into memory, which defeats memory efficiency for large files.
- readlines() consumes memory for the entire file, and manually opening/closing the file adds the risk of it not being closed if an exception occurs.
Step-by-step reasoning:
1. with open(...) ensures safe closure.
2. Iterating over f reads one line at a time.
3. process(line) can perform any required operation on each line.
Key takeaways:
- Avoid read() or readlines(), which load the whole file into memory.
- Use with open(...) for safe file handling.
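A minimal sketch of this pattern; process() here is a placeholder for whatever per-line work is needed:

def process(line):
    # placeholder: just report the length of each line
    print(len(line))

with open("bigdata.txt", "r") as f:
    for line in f:            # only one line is held in memory at a time
        process(line)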
Checking if a file exists before opening
- Attempting to open a file that does not exist raises a FileNotFoundError. Although exceptions can be handled, pre-checking is safer in many cases.
- os.remove() deletes the file; it does not check for existence.
- if "filename.txt": always evaluates to True because it is a non-empty string; it does not check file existence.
- os.path.exists("filename.txt") safely checks if the file exists before attempting to open it.
Example Code:
import os

if os.path.exists("filename.txt"):
    with open("filename.txt", "r") as f:
        print(f.read())
else:
    print("File does not exist.")
Key takeaways:
- Use os.path.exists() to check file existence before reading or writing.
- with open(...) ensures safe file handling.
You want to write the integers [10, 20, 30, 40] to a binary file numbers.bin and then read them back. Which snippet correctly does this?
Writing and reading integers to/from a binary file
- bytes(n) expects an iterable of integers (0–255), so bytes(10) creates 10 null bytes, not the integer value itself.
- n.to_bytes(2, byteorder='big') converts each integer into 2 bytes. Reading back and using int.from_bytes(...) reconstructs the original integers.
- str(numbers).encode() stores a string representation; reading back gives a bytes object of the string, not the original integers.
- f.write(numbers) raises a TypeError; lists cannot be directly written as bytes.
Step-by-step reasoning:
1. Convert each integer to 2 bytes with n.to_bytes(2, byteorder='big').
2. Write the bytes to numbers.bin, opened in binary write mode.
3. Read the raw bytes back with f.read().
4. Reconstruct the original integers with int.from_bytes(...).
Key takeaways:
- to_bytes() and from_bytes() are essential for converting integers to and from byte representations.
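A minimal sketch of the approach described above (2 bytes per integer, big-endian), written against numbers.bin:

numbers = [10, 20, 30, 40]

# Write: each integer becomes exactly 2 big-endian bytes.
with open("numbers.bin", "wb") as f:
    for n in numbers:
        f.write(n.to_bytes(2, byteorder="big"))

# Read back: take the bytes two at a time and rebuild the integers.
with open("numbers.bin", "rb") as f:
    data = f.read()

restored = [int.from_bytes(data[i:i + 2], byteorder="big")
            for i in range(0, len(data), 2)]
print(restored)   # [10, 20, 30, 40]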
You have the text "Solviyo – Python Exercises". Which snippet correctly writes this text to a file in UTF-16 encoding and reads it back?
Writing and reading text with specific encoding (UTF-16)
encoding="utf-16"
. Writing and reading preserve Unicode characters correctly.f.write(text.encode("utf-16"))
returns bytes, but the file is opened in text mode, which expects a string. This raises TypeError
.Step-by-step reasoning:
encoding="utf-16"
to handle Unicode correctly.Example Code:
text = "Solviyo – Python Exercises"
with open("exercise.txt", "w", encoding="utf-16") as f:
f.write(text)
with open("exercise.txt", "r", encoding="utf-16") as f:
content = f.read()
print(content) # Output: Solviyo – Python Exercises
Key takeaways:
- Specify the encoding explicitly (here encoding="utf-16") for both writing and reading so Unicode text round-trips correctly.
You have a file sample.txt containing multiple lines. You want to read the first 10 characters, move back 5 characters, and then read the next 10 characters. Which snippet achieves this correctly?
Using seek() with relative position to re-read part of a file
- f.read(10) reads the first 10 characters. f.seek(-5, 1) moves the pointer 5 positions back from the current position (a relative seek), so the next f.read(10) reads the intended range. Note that relative seeks with a nonzero offset are only allowed on files opened in binary mode.
- f.seek(5) moves the pointer to the 6th character from the beginning, not relative to the current pointer, so the next read skips characters.
- f.seek(-5) with the default whence=0 asks for a negative absolute position, which is invalid and raises an error.
- f.read(5) reads fewer characters, and f.seek(10) moves the pointer to the 11th character, skipping part of the desired data.
Step-by-step reasoning:
1. f.read(10) reads the first 10 characters, leaving the pointer at position 10.
2. f.seek(-5, 1) moves back 5 from the current pointer, so the pointer is at position 5.
3. The next f.read(10) reads positions 5–14, overlapping the first read by 5 characters.
Example Code:
with open("sample.txt", "r") as f:
first_part = f.read(10)
f.seek(-5, 1)
second_part = f.read(10)
print(first_part)
print(second_part)
Key takeaways:
- seek(offset, whence) moves the file pointer relative to the beginning (0), the current position (1), or the end (2).
- Use whence=1 or whence=2 to move backwards; in text mode, relative seeks with a nonzero offset are not allowed, so open the file in binary mode for these.
You want to read bigfile.txt in chunks of 1024 bytes to avoid memory issues. Which snippet correctly does this?
Reading a large file in fixed-size chunks
- The correct snippet uses a while True loop to read 1024 bytes at a time; the loop breaks when f.read(1024) returns an empty string (end of file). Each chunk is processed without loading the entire file into memory.
- f.read(1024) returns a string of up to 1024 characters; iterating over that string loops character by character, not chunk by chunk.
- f.readall() is not a method of the file objects returned by open() in text mode, and reading everything at once would load the entire file into memory anyway.
- f.read() reads the entire file at once, which is memory-inefficient for large files.
Step-by-step reasoning:
with open("bigfile.txt", "r") as f
.f.read(1024)
to read up to 1024 characters (bytes in text mode) at a time.chunk
is empty; if yes, break the loop.process(chunk)
.Example Code:
def process(chunk):
    print("Processing chunk of size:", len(chunk))

with open("bigfile.txt", "r") as f:
    while True:
        chunk = f.read(1024)
        if not chunk:
            break
        process(chunk)
Key takeaways:
- Read fixed-size chunks in a loop and stop when read() returns an empty string; this keeps memory usage constant regardless of file size.
You want to read input.txt, convert all text to uppercase, and write it to output.txt in a memory-efficient way. Which snippet correctly does this?
Reading from one file, transforming, and writing to another efficiently
- The correct snippet iterates over the input line by line, converts each line with line.upper(), and writes it immediately to the output file; this is memory-efficient for large files.
- fin.read() reads the entire file into memory, which is inefficient for very large files.
Step-by-step reasoning:
1. Open both files with with open(...) to ensure safe closure.
2. Iterate line by line over input.txt to avoid loading the entire file into memory.
3. Transform each line with line.upper().
4. Write each transformed line to output.txt.
Example Code:
with open("input.txt", "r") as fin, open("output.txt", "w") as fout:
for line in fin:
fout.write(line.upper())
Key takeaways:
- Using with open(...) for both files ensures they are closed safely even if errors occur.
You need a function that recursively reads all .txt files in a directory data/ and counts the total number of lines. Which option correctly implements this?
import os

def count_lines(dir_path):
    total_lines = 0
    for entry in os.listdir(dir_path):
        path = os.path.join(dir_path, entry)
        if os.path.isdir(path):
            total_lines += count_lines(path)
        elif path.endswith(".txt"):
            with open(path, "r") as f:
                total_lines += len(f.readlines())
    return total_lines
Recursively reading all .txt files in a directory
- os.listdir(dir_path) returns names without the directory path, so open(entry, "r") may fail.
- The correct option uses os.path.isdir(path) to check for directories, recurses into them, and sums the line counts from all .txt files.
- In the incorrect options, total_lines could be overwritten inside the loop instead of accumulated.
Step-by-step reasoning:
1. Use os.listdir(dir_path) to get all entries in the directory.
2. Check whether each entry is a directory with os.path.isdir(path). If yes, recurse.
3. For .txt files, count lines with len(f.readlines()).
4. Accumulate the counts into total_lines and return the total after processing all entries.
Example Usage:
total = count_lines("data/")
print("Total lines in all .txt files:", total)
Key takeaways:
- Use os.path.join() to build the full path before opening or recursing.
You want to read the first 15 characters of example.txt, move 7 characters back, and then read 10 characters. Which snippet correctly achieves this?
Manipulating file pointer to re-read a portion of a file
- f.seek(7) moves the pointer to the 8th character from the beginning, not relative to the current position, so the second read is incorrect.
- f.read(15) reads the first 15 characters. f.seek(-7, 1) moves 7 positions back relative to the current pointer, allowing the next f.read(10) to correctly re-read the overlapping data. As with any nonzero relative seek, the file must be opened in binary mode.
- f.seek(0) moves the pointer to the beginning, so the second read duplicates the first 10 characters instead of reading the intended section.
Step-by-step reasoning:
1. f.read(15) reads the first 15 characters, leaving the pointer at position 15.
2. f.seek(-7, 1) moves back 7 from the current pointer, so the pointer is at position 8.
3. The next f.read(10) reads positions 8–17.
Example Usage:
with open("example.txt", "r") as f:
first_part = f.read(15)
f.seek(-7, 1)
second_part = f.read(10)
print(first_part)
print(second_part)
Key takeaways:
- seek(offset, whence) moves the pointer relative to the current position (whence=1), the beginning (0), or the end (2); open the file in binary mode when seeking backwards.
You want to read largefile.txt in 2048-byte chunks and write each chunk to output.txt, ensuring all data is written correctly even if the last chunk is smaller. Which snippet achieves this?
Buffered reading and writing of a large file
- The correct snippet uses a while True loop to read the file in 2048-byte chunks, writing each chunk to the output. The loop ends when fin.read(2048) returns empty, ensuring all data is written correctly.
- Iterating over fin reads line by line (the text-mode default), not in 2048-byte chunks, so chunk-size control is lost.
Step-by-step reasoning:
1. Open both files in binary mode (rb and wb).
2. Use fin.read(2048) to read 2048-byte chunks.
3. Check whether chunk is empty; if yes, break the loop.
4. Write each chunk to output.txt immediately using fout.write(chunk).
Example Code:
with open("largefile.txt", "rb") as fin, open("output.txt", "wb") as fout:
while True:
chunk = fin.read(2048)
if not chunk:
break
fout.write(chunk)
Key takeaways:
- Opening both files in binary mode (rb/wb) ensures exact byte copying without encoding issues.
You want to read logs.txt line by line and write only the lines containing the word "ERROR" to error_logs.txt. Which snippet correctly achieves this?
Filtering lines from a file based on a condition
"ERROR"
is in the line, and writes it immediately. Efficient and correct for large files.Step-by-step reasoning:
1. Open both files with with for safe closure.
2. Iterate line by line over logs.txt.
3. Check whether each line contains "ERROR".
4. Write matching lines to error_logs.txt.
Example Usage:
with open("logs.txt", "r") as fin, open("error_logs.txt", "w") as fout:
for line in fin:
if "ERROR" in line:
fout.write(line)
Key takeaways:
- Using with ensures files are closed even if errors occur.
You have a binary file data.bin containing ASCII text. You want to read it, convert all lowercase letters to uppercase, and write the result to output.txt. Which snippet correctly achieves this?
Converting binary file content to uppercase text
"r"
), which may fail or misinterpret bytes; also may not handle non-ASCII bytes correctly."rb"
mode, reads bytes, decodes as ASCII, converts to uppercase using .upper()
, and writes to output in text mode.Step-by-step reasoning:
1. Open data.bin in binary read mode ("rb").
2. Read all bytes with fin.read().
3. Decode to text with data.decode("ascii").
4. Convert to uppercase with .upper().
5. Open output.txt in write mode and write the transformed text.
Example Usage:
with open("data.bin", "rb") as fin, open("output.txt", "w") as fout:
data = fin.read()
text = data.decode("ascii").upper()
fout.write(text)
Key takeaways:
- Read binary data in "rb" mode and decode it explicitly before applying string methods like upper().
Practicing Python Working with Files? Don’t forget to test yourself later in our Python Quiz.
When we talk about real-world Python projects, file handling is one of those skills that truly separate beginners from confident programmers. At Solviyo, we’ve built a complete set of Python file handling exercises with explanations and answers to help you master how Python works with files — reading, writing, appending, and managing data efficiently.
We start with the basics, where you’ll get comfortable opening files, reading their content, and writing new data into them. You’ll learn how to use the built-in open() function, the difference between modes like 'r', 'w', and 'a', and how to properly close files after operations. Each exercise is crafted carefully, combining practical examples and short Python MCQs to help you build a deeper understanding. Every question includes both the correct answer and a clear explanation — so you’re not just memorizing syntax, you’re actually learning how file handling works in real scenarios.
As we move forward, we explore more advanced file operations — like reading files line by line, working with file pointers, handling binary data, and managing exceptions while working with files. These are the kind of situations you’ll face when building automation scripts or data-driven applications. Our exercises are designed to make you think through these situations naturally, just as you would while writing real Python code at work.
We also cover best practices that every Python developer should know — like using with open() statements to handle files safely, dealing with file paths using the os and pathlib modules, and avoiding common mistakes that lead to file corruption or data loss. The goal isn’t just to solve problems but to help you develop habits that make your code cleaner, more reliable, and easier to maintain.
For learners preparing for interviews or online assessments, you’ll find Python file handling MCQs with answers that test your understanding of concepts like file modes, reading methods, and context management. These quick checks are great for revision and ensure that you’re confident with both the theory and practical parts of file handling.
At Solviyo, we believe that learning Python should feel practical and enjoyable. Our file handling exercises with explanations and answers give you the clarity and confidence to handle any file-based task in Python — from simple text files to more complex binary or structured data files. Dive in and practice with us — mastering file operations in Python has never been this easy and well-explained.