How to run multiple Python files in a folder one after another?
The subprocess module can be used to run multiple Python files in a folder one after another. Running files sequentially is useful in situations such as processing large datasets through multiple stages, performing complex analysis, or automating workflows. In this article, we discuss different methods for running multiple Python files in a folder sequentially, with practical examples.
Method 1: Using subprocess Module
The subprocess module provides a powerful way to spawn new processes and run external commands. Here's how to use it to run Python files sequentially:
Step 1: Create Sample Python Files
First, let's create three Python files with simple content to demonstrate sequential execution:
file1.py
print("Executing file1.py - Data preprocessing started")
print("Data preprocessing completed")
file2.py
print("Executing file2.py - Model training started")
print("Model training completed")
file3.py
print("Executing file3.py - Results generation started")
print("Results generation completed")
Step 2: Create the Main Runner Script
Now create a run_script.py file that uses subprocess to execute the other files:
import subprocess
import sys

files = ['file1.py', 'file2.py', 'file3.py']

for file in files:
    print(f"Running {file}...")
    result = subprocess.run([sys.executable, file], capture_output=True, text=True)
    print(result.stdout)

    if result.returncode != 0:
        print(f"Error in {file}: {result.stderr}")
        break

    print(f"{file} completed successfully\n")
Running file1.py...
Executing file1.py - Data preprocessing started
Data preprocessing completed

file1.py completed successfully

Running file2.py...
Executing file2.py - Model training started
Model training completed

file2.py completed successfully

Running file3.py...
Executing file3.py - Results generation started
Results generation completed

file3.py completed successfully
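If you prefer exceptions over manually checking return codes, subprocess.run also accepts check=True, which raises subprocess.CalledProcessError when a script exits with a non-zero status. The sketch below is a variant of the runner above; the file names are the same hypothetical file1.py, file2.py, and file3.py, and the final demonstration uses an inline -c snippet so it runs without those files existing.

```python
import subprocess
import sys

files = ['file1.py', 'file2.py', 'file3.py']  # the same hypothetical files as above

def run_sequentially(file_list):
    """Run each file in order; stop at the first failure by re-raising."""
    for file in file_list:
        try:
            result = subprocess.run(
                [sys.executable, file],
                capture_output=True, text=True,
                check=True,  # raises CalledProcessError on a non-zero exit code
            )
            print(result.stdout, end="")
        except subprocess.CalledProcessError as exc:
            print(f"Error in {file}: {exc.stderr}")
            raise

# Self-contained demonstration using an inline snippet instead of a file:
out = subprocess.run(
    [sys.executable, "-c", "print('step done')"],
    capture_output=True, text=True, check=True,
).stdout
```

This is a stylistic choice: check=True moves error handling into try/except blocks, while checking result.returncode keeps it as ordinary control flow.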
Method 2: Using Import and Function Calls
This method imports each file as a module and calls specific functions. First, modify the Python files to include a main() function:
file1.py
def main():
    print("Processing data in file1")
    return "File1 completed"

if __name__ == "__main__":
    main()
file2.py
def main():
    print("Processing data in file2")
    return "File2 completed"

if __name__ == "__main__":
    main()
file3.py
def main():
    print("Processing data in file3")
    return "File3 completed"

if __name__ == "__main__":
    main()
Create the runner script:
import file1
import file2
import file3

# Execute files sequentially
modules = [file1, file2, file3]

for i, module in enumerate(modules, 1):
    print(f"Executing module {i}:")
    result = module.main()
    print(f"Result: {result}\n")
Executing module 1:
Processing data in file1
Result: File1 completed

Executing module 2:
Processing data in file2
Result: File2 completed

Executing module 3:
Processing data in file3
Result: File3 completed
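The runner above hardcodes its import statements. When the module names are only known at runtime (for example, read from a config file), the standard library's importlib.import_module can load them by name. A minimal sketch, where file1/file2/file3 are the same hypothetical modules as above, each defining main(); the demonstration at the end uses the stdlib math module so the snippet runs standalone.

```python
import importlib

module_names = ['file1', 'file2', 'file3']  # hypothetical modules, each defining main()

def run_mains(names):
    """Import each module by name and call its main() function in order."""
    results = []
    for name in names:
        module = importlib.import_module(name)
        results.append(module.main())
    return results

# Self-contained check: import_module behaves like a normal import statement
math_mod = importlib.import_module('math')
```

Calling run_mains(module_names) would then be equivalent to the hardcoded loop, but driven by data.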
Method 3: Using os Module with Dynamic File Discovery
This approach automatically discovers and runs all Python files in a directory:
import os
import subprocess
import sys

def run_python_files_in_directory(directory_path="."):
    # Get all Python files in the directory
    python_files = [f for f in os.listdir(directory_path)
                    if f.endswith('.py') and f != 'run_all.py']

    # Sort files to ensure consistent execution order
    python_files.sort()
    print(f"Found {len(python_files)} Python files: {python_files}")

    for file in python_files:
        print(f"\n{'='*50}")
        print(f"Executing: {file}")
        print('='*50)

        result = subprocess.run([sys.executable, file],
                                capture_output=True, text=True)

        if result.returncode == 0:
            print(result.stdout)
            print(f"{file} executed successfully")
        else:
            print(f"Error in {file}:")
            print(result.stderr)
            return False

    return True

# Run all Python files
success = run_python_files_in_directory()
print(f"\nAll files executed: {'Successfully' if success else 'With errors'}")
Comparison of Methods
| Method | Best For | Advantages | Disadvantages |
|---|---|---|---|
| subprocess | Independent scripts | Full isolation, error handling | Higher overhead |
| Import & Functions | Related modules | Fast, shared memory | Less isolation |
| Dynamic Discovery | Automated workflows | No hardcoding files | Less control over order |
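One error-handling trade-off the table glosses over: Methods 1 and 3 stop at the first failing script, but for some workflows (e.g. batch processing of independent datasets) you want to run everything and report failures at the end. A hedged sketch of that collect-and-report variant; the two inline -c snippets are stand-ins for script files, one succeeding and one failing.

```python
import subprocess
import sys

def run_all(commands):
    """Run every command and collect (label, returncode) pairs instead of stopping early."""
    outcomes = []
    for label, cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        outcomes.append((label, result.returncode))
    return outcomes

# Two inline snippets stand in for script files: one succeeds, one fails.
commands = [
    ("ok", [sys.executable, "-c", "print('fine')"]),
    ("bad", [sys.executable, "-c", "import sys; sys.exit(1)"]),
]
outcomes = run_all(commands)  # the failure is recorded, not fatal
```

Swapping the break/return in Methods 1 and 3 for this pattern converts them from fail-fast to run-to-completion.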
Common Use Cases
Data Pipelines: Sequential processing of data through cleaning, transformation, and analysis stages
Machine Learning Workflows: Running preprocessing, training, validation, and evaluation scripts in order
Batch Processing: Processing multiple datasets or performing repetitive tasks across files
Testing Suites: Running multiple test files in a specific sequence
Conclusion
Running multiple Python files sequentially can be accomplished through subprocess for isolation, imports for speed, or dynamic discovery for automation. Choose the method based on your specific requirements for error handling, performance, and script independence.
