Developer Guide¶
This guide provides comprehensive information for developers who want to extend, customize, or contribute to Cute Plot.
Architecture Overview¶
Technology Stack¶
- GUI Framework: DearCyGui (Python bindings for Dear ImGui)
- Data Processing: Polars for high-performance data manipulation
- Visualization: OpenGL-based rendering via DearCyGui
- Downsampling: tsdownsample (MinMaxLTTB algorithm)
- File I/O: Native Python with format-specific parsers
- Concurrency: Threading for file loading and processing
Core Components¶
Main Application (cuteplot_dcg.py)¶
- Entry point: Application initialization and main loop
- UI creation: Primary interface construction
- Event handling: Main event loop and viewport management
- Global coordination: Coordinates between major components
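As a rough sketch (assuming DearCyGui's usual context/viewport pattern; the real initialization in cuteplot_dcg.py does considerably more), the entry point boils down to:

import dearcygui as dcg

C = dcg.Context()                          # shared context passed to every widget
C.viewport.initialize(title="Cute Plot")   # create the OS window (title illustrative)
# ... build sidebar, plotting area, and menus here ...
while C.running:                           # main loop: render until the window closes
    C.viewport.render_frame()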
File Loading System (utils/file/)¶
- Modular design: Plugin-based file format support
- Lazy loading: Efficient handling of large files
- Format detection: Automatic format identification
- Processing pipeline: Standardized data processing workflow
Plotting Engine (plotting/)¶
- Subplot management: Multi-subplot grid system
- Performance optimization: Pyramid downsampling for smooth interaction (sketched after this list)
- Interactive features: Zoom, pan, query, annotations
- Series management: Data series lifecycle and rendering
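The pyramid idea can be sketched as follows: several downsampled index arrays are precomputed per series, and on each zoom the coarsest level that still oversamples the visible range is drawn. The level sizes, target count, and helper names below are illustrative, not the engine's actual values:

import numpy as np
from tsdownsample import MinMaxLTTBDownsampler

def build_pyramid(x, y, levels=(2_000, 20_000, 200_000)):
    # Precompute index arrays, coarsest first (sizes are illustrative)
    return {n: MinMaxLTTBDownsampler().downsample(x, y, n_out=n)
            for n in levels if n < len(x)}

def visible_indices(pyramid, x, x_min, x_max, target=1_000):
    # Pick the coarsest level that still yields ~target points in view
    for n, idx in sorted(pyramid.items()):
        sel = idx[(x[idx] >= x_min) & (x[idx] <= x_max)]
        if len(sel) >= target:
            return sel
    return np.where((x >= x_min) & (x <= x_max))[0]  # fall back to full resolution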
Sidebar System (sidebar/)¶
- File widgets: File and series representation
- Selection management: Multi-select and search capabilities
- Drag & drop: Integration with plotting system
- Real-time updates: Dynamic UI updates
Template System (templates/)¶
- YAML-based: Human-readable template format
- Pattern matching: Flexible series pattern matching (see the sketch after this list)
- Automation: Automated plot setup
- Extensibility: Easy template creation and modification
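As a sketch of how a template might look and be applied (the field names are hypothetical; the actual schema lives in templates/template.py):

import fnmatch
import yaml

# Hypothetical template document
TEMPLATE_YAML = """
name: Bus voltages
subplots:
  - title: Voltage magnitude
    patterns: ["*VOLT*", "*_VM"]
"""

def matching_series(template_yaml, series_names):
    # Yield (subplot title, matching series) pairs via glob-style patterns
    template = yaml.safe_load(template_yaml)
    for subplot in template["subplots"]:
        matches = [s for s in series_names
                   if any(fnmatch.fnmatch(s, p) for p in subplot["patterns"])]
        yield subplot["title"], matches

for title, names in matching_series(TEMPLATE_YAML, ["BUS1_VM", "GEN2_FREQ"]):
    print(title, names)  # -> Voltage magnitude ['BUS1_VM']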
Development Environment Setup¶
Prerequisites¶
# Python 3.8+ with development tools
python --version # Should be 3.8 or higher
# Git for version control
git --version
# Virtual environment (recommended)
python -m venv cuteplot-dev
source cuteplot-dev/bin/activate # Linux/Mac
# cuteplot-dev\Scripts\activate # Windows
Development Installation¶
# Clone repository
git clone https://github.com/your-repo/cuteplot.git
cd cuteplot
# Install in development mode
pip install -e .
# Install development dependencies
pip install -r requirements-dev.txt
# Install pre-commit hooks
pre-commit install
Development Dependencies¶
# Core dependencies (automatically installed)
dearcygui>=2.0.0
polars>=0.20.0
tsdownsample>=0.1.3
numpy>=1.21.0
natsort>=8.0.0
# Development dependencies
pytest>=7.0.0
black>=22.0.0
flake8>=4.0.0
mypy>=0.910
pre-commit>=2.15.0
Code Structure and Organization¶
Module Layout¶
cuteplot/
├── cuteplot_dcg.py             # Main application entry point
├── utils/                      # Utility modules
│   ├── file/                   # File loading system
│   │   ├── __init__.py         # Loader registry
│   │   ├── base.py             # Base loader class
│   │   ├── csv.py              # CSV loader
│   │   ├── pscad.py            # PSCAD loader
│   │   └── psse.py             # PSS/E loader
│   ├── theme.py                # UI theming
│   ├── font.py                 # Font management
│   ├── logger.py               # Logging system
│   ├── notifications.py        # User notifications
│   └── series_transformer.py   # Data transformations
├── plotting/                   # Plotting engine
│   ├── widget.py               # Main plotting widgets
│   ├── zoom.py                 # Zoom functionality
│   ├── plot_query.py           # Query system
│   ├── markup.py               # Annotations
│   └── draggable/              # Draggable elements
│       ├── annotation.py       # Draggable annotations
│       ├── drag_lines.py       # Draggable lines
│       └── linked_vline.py     # Linked vertical lines
├── sidebar/                    # Sidebar system
│   ├── widget.py               # File and series widgets
│   ├── globals.py              # Global state management
│   └── __init__.py
├── templates/                  # Template system
│   ├── template.py             # Core template classes
│   └── ui.py                   # Template UI components
└── handlers/                   # Event handlers
    └── tab_bar.py              # Tab bar handlers
Design Patterns¶
Plugin Architecture (File Loaders)¶
# Base class defines the loader interface
from abc import ABC, abstractmethod

class BaseFileLoader(ABC):
    @abstractmethod
    def supports_lazy_loading(self) -> bool:
        pass

    @abstractmethod
    def load_normal(self) -> LoadResult:
        pass

    @abstractmethod
    def load_lazy(self) -> LoadResult:
        pass

# Concrete implementations
class CSVLoader(BaseFileLoader):
    # Implementation specific to CSV files
    pass

# Registry system: maps file extensions to loader classes
LOADERS = {
    '.csv': CSVLoader,
    '.out': PSSELoader,
    '.inf': PSCADLoader,
}
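With the registry in place, format detection reduces to a dictionary lookup on the file extension; a minimal dispatch sketch:

from pathlib import Path

def get_loader(path: str):
    # Returns a loader instance, or None for unsupported extensions
    loader_cls = LOADERS.get(Path(path).suffix.lower())
    return loader_cls(path) if loader_cls else None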
Observer Pattern (UI Updates)¶
# Global state management
class AppGlobals:
    def __init__(self):
        self.observers = []

    def add_observer(self, observer):
        self.observers.append(observer)

    def notify_observers(self, event):
        for observer in self.observers:
            observer.update(event)
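A minimal usage sketch (the observer class and event payload are illustrative):

class SidebarRefresher:
    def update(self, event):
        print(f"sidebar refresh triggered by {event['type']}")

app_globals = AppGlobals()
app_globals.add_observer(SidebarRefresher())
app_globals.notify_observers({"type": "file_loaded"})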
Strategy Pattern (Downsampling)¶
# Different downsampling strategies share a common interface
class MinMaxLTTBDownsampler:
    def downsample(self, x, y, n_out):
        # MinMaxLTTB algorithm implementation
        pass

class LTTBDownsampler:
    def downsample(self, x, y, n_out):
        # Pure LTTB algorithm implementation
        pass
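The benefit is that call sites never need to know which algorithm runs; a sketch (the helper name is hypothetical):

def downsample_for_view(x, y, n_out, strategy=None):
    # Any object exposing downsample(x, y, n_out) is an acceptable strategy
    strategy = strategy or MinMaxLTTBDownsampler()
    return strategy.downsample(x, y, n_out)

# Swap the algorithm without touching the call site
downsample_for_view(x, y, 2_000, strategy=LTTBDownsampler())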
Extending Cute Plot¶
Adding New File Formats¶
1. Create Loader Class¶
# utils/file/custom.py
from .base import BaseFileLoader, LoadResult
import polars as pl

class CustomFormatLoader(BaseFileLoader):
    def supports_lazy_loading(self) -> bool:
        return True  # or False, depending on the format

    def load_normal(self) -> LoadResult:
        try:
            # Load the file completely into memory
            raw_data = self._read_custom_format()
            df = self._convert_to_polars(raw_data)
            df = self.process_dataframe(df)
            return LoadResult(
                df_list=[(df, str(self.file_path))],
                filename_list=[self.file_path.stem],
                is_lazy=False,
            )
        except Exception as e:
            # Handle errors appropriately (log and return an empty result)
            return LoadResult(df_list=[], filename_list=[])

    def load_lazy(self) -> LoadResult:
        # Implement lazy loading if supported
        pass

    def _read_custom_format(self):
        # Custom format reading logic
        pass

    def _convert_to_polars(self, raw_data):
        # Convert raw data to a Polars DataFrame
        pass
2. Register Loader¶
# utils/file/__init__.py
from .custom import CustomFormatLoader

LOADERS = {
    '.csv': CSVLoader,
    '.out': PSSELoader,
    '.inf': PSCADLoader,
    '.custom': CustomFormatLoader,  # Add new loader
}
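A quick smoke test that the registration works (the sample file path is hypothetical):

from utils.file import LOADERS

loader = LOADERS['.custom']('example.custom')  # hypothetical sample file
result = loader.load_normal()
assert result.filename_list == ['example']     # stem of the input path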
Creating Custom Annotations¶
1. Base Annotation Class¶
# plotting/draggable/custom_annotation.py
from .annotation import DragPoint

class CustomAnnotation(DragPoint):
    def __init__(self, context, *args, **kwargs):
        super().__init__(context, *args, **kwargs)
        self._custom_properties = {}

    def setup_callbacks(self):
        # Override to customize behavior
        super().setup_callbacks()

    def on_custom_event(self):
        # Custom event handling
        pass
2. Integration with Plot Menu¶
# plotting/widget.py - in PlotMenuHandler
def plot_menu(self, s, t, d):
    # Existing menu items...
    dcg.MenuItem(
        self.C,
        label="Add custom annotation",
        callback=lambda s, t, d: self.add_custom_annotation(
            x_mouse_coord, y_mouse_coord
        ),
    )
Adding Analysis Functions¶
1. Create Analysis Module¶
# utils/analysis/custom_analysis.py
import numpy as np
from typing import Dict, Any

def custom_statistical_analysis(data: np.ndarray) -> Dict[str, Any]:
    """Custom statistical analysis function."""
    return {
        'custom_metric_1': calculate_custom_metric_1(data),
        'custom_metric_2': calculate_custom_metric_2(data),
        'custom_description': "Custom analysis results",
    }

def calculate_custom_metric_1(data):
    # Example implementation: RMS of the signal
    return float(np.sqrt(np.mean(np.square(data))))

def calculate_custom_metric_2(data):
    # Example implementation: peak-to-peak amplitude
    return float(np.ptp(data))
2. Integration with Query System¶
# plotting/plot_query.py
from utils.analysis.custom_analysis import custom_statistical_analysis

def _perform_analysis(self, x_data, y_data):
    # Existing analysis...
    # Add custom analysis
    if query_settings.get_setting("Custom Analysis"):
        custom_results = custom_statistical_analysis(y_data)
        analysis_results.update(custom_results)
Performance Optimization¶
Memory Management¶
Efficient Data Structures¶
# Use appropriate data types
df = df.with_columns([
    pl.col("time_column").cast(pl.Float32),  # Half the size of Float64
    pl.col("data_column").cast(pl.Float32),
])

# Memory-map large files when possible
data = np.memmap(filename, dtype='float32', mode='r')
Garbage Collection¶
# Explicit cleanup in critical sections
import gc

def cleanup_resources(self):
    self._large_data_structure = None
    self._cached_computations.clear()
    gc.collect()
Computational Optimization¶
Vectorized Operations¶
# Use NumPy/Polars vectorized operations
# Good: one vectorized expression over whole arrays
result = np.sqrt(data ** 2 + other_data ** 2)

# Avoid: an interpreted Python loop over every element
result = []
for i in range(len(data)):
    result.append(math.sqrt(data[i] ** 2 + other_data[i] ** 2))
Parallel Processing¶
# Use ThreadPoolExecutor for I/O-bound tasks
import concurrent.futures

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(process_file, f) for f in files]
    results = [future.result() for future in futures]

# Use ProcessPoolExecutor for CPU-bound tasks
with concurrent.futures.ProcessPoolExecutor() as executor:
    results = list(executor.map(cpu_intensive_function, data_chunks))
Testing Guidelines¶
Unit Testing¶
# tests/test_file_loading.py
import pytest
from utils.file.csv import CSVLoader
from pathlib import Path

class TestCSVLoader:
    def test_csv_loader_creation(self):
        loader = CSVLoader("test_data.csv")
        assert loader.file_path.name == "test_data.csv"

    def test_normal_loading(self, sample_csv_file):
        loader = CSVLoader(sample_csv_file)
        result = loader.load_normal()
        assert len(result.df_list) > 0
        assert not result.is_lazy

    @pytest.fixture
    def sample_csv_file(self, tmp_path):
        # Create a sample CSV for testing
        csv_content = "time,value1,value2\n0.0,1.0,2.0\n0.1,1.1,2.1\n"
        csv_file = tmp_path / "sample.csv"
        csv_file.write_text(csv_content)
        return str(csv_file)
Integration Testing¶
# tests/test_plotting_integration.py
import pytest
from plotting.widget import SubplotWidget

class TestPlottingIntegration:
    def test_series_addition_to_plot(self, sample_subplot, sample_series):
        # Test the complete workflow from file loading to plotting
        subplot = sample_subplot
        series_data = sample_series
        subplot.add_series(series_data)
        assert len(subplot.plot_list[0].children) > 0
        # Additional assertions...
Performance Testing¶
# tests/test_performance.py
import time
import pytest
from utils.file.csv import CSVLoader

class TestPerformance:
    def test_large_file_loading_performance(self, large_csv_file):
        start_time = time.time()
        loader = CSVLoader(large_csv_file)
        result = loader.load_lazy()
        load_time = time.time() - start_time
        assert load_time < 2.0  # Should load within 2 seconds

    @pytest.mark.benchmark
    def test_downsampling_performance(self, large_dataset):
        # Benchmark downsampling algorithms
        pass
Code Style and Standards¶
Python Style¶
Follow PEP 8 with these specific guidelines:
# Import order
import os  # Standard library
import sys

import numpy as np  # Third-party
import polars as pl

from utils.logger import log_info  # Local imports
from sidebar.widget import SeriesWidget
Documentation Standards¶
def complex_function(data: np.ndarray, threshold: float = 0.5) -> Dict[str, Any]:
    """
    Perform complex analysis on time-series data.

    This function analyzes time-series data to extract meaningful metrics
    and identify patterns based on the provided threshold.

    Args:
        data: Input time-series data as a NumPy array
        threshold: Analysis threshold value (default: 0.5)

    Returns:
        Dictionary containing analysis results:
        - 'metric1': First analysis metric
        - 'metric2': Second analysis metric
        - 'summary': Text summary of results

    Raises:
        ValueError: If data is empty or threshold is invalid

    Example:
        >>> data = np.array([1, 2, 3, 4, 5])
        >>> results = complex_function(data, threshold=0.3)
        >>> print(results['summary'])
    """
    # Implementation...
Error Handling¶
def robust_function():
    try:
        # Main logic
        result = perform_operation()
        return result
    except FileNotFoundError as e:
        log_error(f"File not found: {e}")
        show_error_notification(C, f"File not found: {e.filename}")
        return None
    except ValueError as e:
        log_warning(f"Invalid value: {e}")
        show_warning_notification(C, "Invalid data format")
        return None
    except Exception as e:
        log_error(f"Unexpected error: {e}")
        show_error_notification(C, "An unexpected error occurred")
        raise  # Re-raise for debugging in development
Contributing Guidelines¶
Development Workflow¶
1. Branch Strategy¶
# Create feature branch
git checkout -b feature/new-file-format
# Make changes
git add .
git commit -m "Add support for new file format"
# Push and create pull request
git push origin feature/new-file-format
2. Commit Messages¶
Follow conventional commit format:
feat: add support for HDF5 file format
fix: resolve memory leak in downsampling
docs: update installation guide
test: add performance tests for file loading
refactor: simplify template matching logic
3. Code Review Process¶
- Self-review: Review your own changes before submission
- Automated checks: Ensure all CI checks pass
- Peer review: Address reviewer feedback
- Documentation: Update documentation for new features
Pull Request Template¶
## Description
Brief description of changes
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests pass
- [ ] Performance impact evaluated
## Checklist
- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
- [ ] No new warnings introduced
Debugging and Troubleshooting¶
Debug Tools¶
Logger Integration¶
from utils.logger import log_info, log_warning, log_error

def debug_function():
    log_info("Starting function execution")
    try:
        result = complex_operation()
        log_info(f"Operation successful: {result}")
        return result
    except Exception as e:
        log_error(f"Operation failed: {e}")
        raise
Performance Profiling¶
import cProfile
import pstats

def profile_function():
    pr = cProfile.Profile()
    pr.enable()
    # Code to profile
    expensive_operation()
    pr.disable()
    stats = pstats.Stats(pr)
    stats.sort_stats('cumulative')
    stats.print_stats()
Common Development Issues¶
Memory Issues¶
# Monitor memory usage
import psutil
import os

def check_memory():
    process = psutil.Process(os.getpid())
    memory_info = process.memory_info()
    print(f"Memory usage: {memory_info.rss / 1024 / 1024:.1f} MB")
Threading Issues¶
# Thread-safe operations
import threading

class ThreadSafeCache:
    def __init__(self):
        self._cache = {}
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            return self._cache.get(key)

    def set(self, key, value):
        with self._lock:
            self._cache[key] = value
Future Development¶
Planned Enhancements¶
- Additional file formats: Excel, Parquet, HDF5
- Advanced analysis: Machine learning integration
- Collaboration features: Shared sessions, comments
- Export capabilities: PDF, SVG, high-resolution images
- Plugin system: Third-party plugin support
Architecture Evolution¶
- Modular plugins: Fully pluggable architecture
- Web interface: Browser-based version
- Cloud integration: Cloud storage and processing
- Real-time data: Live data streaming support
Development tip: Start by exploring existing code patterns, then introduce changes gradually while staying consistent with the established architecture.