Sphinx Style API Documentation¶
Documentation Style
This page demonstrates API documentation using Sphinx-style docstrings, which provide rich reStructuredText markup capabilities for complex documentation needs.
Overview¶
Sphinx-style docstrings use reStructuredText field lists with :param: and :return: tags. They're particularly well-suited for the following (a minimal example appears after this list):
- Complex enterprise projects
- Libraries with extensive documentation
- Projects using Sphinx documentation generator
- APIs requiring rich formatting
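As a minimal sketch of the style, here is a small illustrative function (the name and parameters are hypothetical, not part of this library)::

    def scale(value: float, factor: float = 1.1) -> float:
        """Scale a numeric value.

        :param value: Number to scale
        :type value: float
        :param factor: Multiplier applied to the value
        :type factor: float, optional
        :returns: The scaled value
        :rtype: float
        """
        return value * factor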
DataProcessor Class¶
src.docstring_examples.sphinx_style.DataProcessor¶
Comprehensive data processor with loading, transformation, and export.
This class provides a complete data processing pipeline including data loading, transformation operations, validation, and export functionality. It supports various data formats and provides extensive configuration options.
The processor maintains internal state and provides detailed logging of all operations for debugging and monitoring purposes.
:param name: Descriptive name for this processor instance
:type name: str
:param validation_enabled: Whether to enable data validation
:type validation_enabled: bool, optional
:param max_transformations: Maximum number of transformations allowed
:type max_transformations: int, optional
.. note:: This processor is thread-safe for read operations but not for concurrent modifications. Use appropriate locking mechanisms if sharing across threads.
.. warning:: The processor has a maximum transformation limit to prevent infinite loops or excessive memory usage.
Examples::

    # Create processor with validation enabled
    processor = DataProcessor("sales_data", validation_enabled=True)

    # Load data from various sources
    processor.load_data({"product": "Widget", "sales": 1000})
    processor.load_from_file("additional_data.json")

    # Apply transformations
    processor.transform_data(
        lambda x: x * 1.1 if isinstance(x, (int, float)) else x
    )
    processor.apply_filter(lambda item: item.get("sales", 0) > 500)

    # Export results
    processor.export_data("processed_results.json")
.. versionadded:: 1.0.0 Initial implementation with basic processing capabilities
Initialize the data processor.
:param name: Descriptive name for this processor instance
:type name: str
:param validation_enabled: Whether to enable data validation
:type validation_enabled: bool, optional
:param max_transformations: Maximum number of transformations allowed
:type max_transformations: int, optional
:raises ValueError: If name is empty or max_transformations is negative
Examples::

    # Basic processor
    processor = DataProcessor("basic_processor")

    # Advanced processor with custom settings
    processor = DataProcessor(
        name="advanced_processor",
        validation_enabled=True,
        max_transformations=50
    )
Attributes¶
status property¶
Get the current processor status.
:returns: String indicating current status ("active" or "inactive")
:rtype: str
.. versionadded:: 1.0.0 Added status property for monitoring processor state
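A minimal usage sketch, guarding an operation on the documented status values::

    # Only operate while the processor reports itself active
    if processor.status == "active":
        processor.load_data({"id": 1})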
Functions¶
load_data¶
Load data into the processor.
This method accepts data in dictionary or list format and stores it internally for subsequent processing operations. The data is validated if validation is enabled.
:param data: Data to load - either a single dictionary or list of dictionaries
:type data: dict or list of dict
:raises ProcessingError: If data validation fails or processor is inactive
:raises TypeError: If data is not in expected format
Examples::

    # Load single record
    processor.load_data({"id": 1, "name": "Alice", "score": 95})

    # Load multiple records
    processor.load_data([
        {"id": 1, "name": "Alice", "score": 95},
        {"id": 2, "name": "Bob", "score": 87}
    ])
.. note:: Data validation is performed if :attr:`validation_enabled` is True.

.. seealso:: :meth:`load_from_file` for loading data from files.
load_from_file¶
Load data from a JSON file.
Reads data from the specified file path and loads it into the processor. Supports both string paths and Path objects.
:param file_path: Path to the JSON file to load
:type file_path: str or pathlib.Path
:raises ProcessingError: If file cannot be read or contains invalid JSON
:raises FileNotFoundError: If the specified file does not exist
:raises PermissionError: If insufficient permissions to read the file
Examples::

    # Load from string path
    processor.load_from_file("data/input.json")

    # Load from Path object
    from pathlib import Path
    processor.load_from_file(Path("data") / "input.json")
.. note:: This method internally calls :meth:`load_data` after reading the file.
.. versionadded:: 1.0.0 Added file loading capability
transform_data¶
Apply a transformation function to all data values.
Applies the provided transformation function to each value in the loaded data. The transformation preserves the data structure while modifying individual values.
:param transformation_func: Function to apply to each data value. Should accept any value and return the transformed value.
:type transformation_func: callable
:returns: Dictionary containing transformation results
:rtype: dict
:raises ProcessingError: If no data is loaded, processor is inactive, or max transformations exceeded
:raises ValueError: If transformation_func is not callable
The returned dictionary contains the following keys:
- records_processed (int): Number of records processed
- transformations_applied (int): Total transformations applied to this dataset
- success (bool): Whether the transformation completed successfully
Examples::

    # Convert all strings to uppercase
    result = processor.transform_data(
        lambda x: x.upper() if isinstance(x, str) else x
    )

    # Apply mathematical transformation to numbers
    result = processor.transform_data(
        lambda x: x * 1.1 if isinstance(x, (int, float)) else x
    )

    # Complex transformation with type checking
    def complex_transform(value):
        if isinstance(value, str):
            return value.strip().title()
        elif isinstance(value, (int, float)):
            return round(value * 1.05, 2)
        return value

    result = processor.transform_data(complex_transform)
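The returned summary can be checked directly; a minimal sketch using the keys documented above::

    if result["success"]:
        print(f"Processed {result['records_processed']} records")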
.. warning:: The processor enforces a maximum number of transformations to prevent infinite loops or excessive resource usage.
.. versionadded:: 1.0.0 Initial transformation capability
apply_filter¶
Filter data records based on a predicate function.
Removes records that don't match the filter criteria. The filter function should return True for records to keep and False for records to remove.
:param filter_func: Predicate function that accepts a record dictionary and returns True to keep the record, False to remove it
:type filter_func: callable
:returns: Dictionary containing filter results
:rtype: dict
:raises ProcessingError: If no data is loaded or processor is inactive
:raises ValueError: If filter_func is not callable
The returned dictionary contains the following keys:
- records_before (int): Number of records before filtering
- records_after (int): Number of records after filtering
- records_removed (int): Number of records removed
- success (bool): Whether the filter operation completed successfully
Examples::

    # Keep only records with score > 80
    result = processor.apply_filter(lambda record: record.get('score', 0) > 80)

    # Keep records with specific status
    result = processor.apply_filter(
        lambda record: record.get('status') == 'active'
    )

    # Complex filter with multiple conditions
    def complex_filter(record):
        return (record.get('score', 0) > 70 and
                record.get('active', False) and
                len(record.get('name', '')) > 0)

    result = processor.apply_filter(complex_filter)
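The returned counts support quick sanity checks; a minimal sketch using the keys documented above::

    if result["success"]:
        print(f"Removed {result['records_removed']} of {result['records_before']} records")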
.. note:: Filtering modifies the internal data structure by removing records that don't match the criteria.
.. versionadded:: 1.0.0 Added filtering capability
export_data¶
Export processed data to a file.
Saves the current processed data to the specified file path in the requested format. Currently supports JSON format with plans for additional formats in future versions.
:param file_path: Output file path for the exported data
:type file_path: str or pathlib.Path
:param format: Export format ("json" currently supported)
:type format: str, optional
:raises ProcessingError: If no data to export, processor inactive, or export fails
:raises ValueError: If format is not supported
:raises PermissionError: If insufficient permissions to write to the file
Examples::

    # Basic JSON export
    processor.export_data("output.json")

    # Export with explicit format
    processor.export_data("output.json", format="json")

    # Export to Path object
    from pathlib import Path
    output_path = Path("exports") / "processed_data.json"
    processor.export_data(output_path)
.. note:: The export operation creates parent directories if they don't exist.
.. todo:: Add support for CSV, XML, and other export formats.
.. versionadded:: 1.0.0 Initial JSON export capability
get_statistics¶
Get comprehensive statistics about the processor and its data.
Returns detailed information about the current state of the processor, including data counts, transformation history, and processing metrics.
:returns: Dictionary containing comprehensive processor statistics
:rtype: dict
The returned dictionary contains the following keys:
- processor_name (str): Name of this processor instance
- processor_status (str): Current status (active/inactive)
- data_loaded (bool): Whether data is currently loaded
- record_count (int): Number of records currently loaded
- transformations_applied (int): Number of transformations applied
- export_count (int): Number of times data has been exported
- validation_enabled (bool): Whether validation is enabled
- created_at (str): When the processor was created (ISO format)
- uptime_seconds (float): How long the processor has existed, in seconds
Examples::

    stats = processor.get_statistics()
    print(f"Processor: {stats['processor_name']}")
    print(f"Records: {stats['record_count']}")
    print(f"Transformations: {stats['transformations_applied']}")
.. note:: The uptime is calculated from processor creation time to current time.
.. versionadded:: 1.0.0 Added comprehensive statistics reporting
process¶
Process data using the internal pipeline.
Implementation of the abstract process method from BaseProcessor. This method provides a simplified interface for basic data processing.
:param data: Data to process
:type data: Any
:returns: Processed data
:rtype: Any
:raises ProcessingError: If processing fails
.. note:: This is a simplified interface. For advanced processing, use the specific methods like :meth:`load_data`, :meth:`transform_data`, etc.
.. versionadded:: 1.0.0 Implementation of abstract process method
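A minimal usage sketch of the single-call interface (the input record is hypothetical)::

    # Wraps the internal processing pipeline in one call
    processed = processor.process({"id": 1, "score": 95})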
deactivate¶
Deactivate the processor.
Once deactivated, the processor should not perform any operations until reactivated.
.. note:: This method logs the deactivation event for monitoring purposes.
.. versionadded:: 1.0.0 Initial implementation of processor deactivation
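A minimal sketch pairing deactivation with the status property documented above::

    processor.deactivate()
    assert processor.status == "inactive"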
ProcessingError Exception¶
src.docstring_examples.sphinx_style.ProcessingError¶
ProcessingError(message: str, error_code: Optional[str] = None, original_error: Optional[Exception] = None)
Bases: Exception
Custom exception for data processing errors.
This exception is raised when data processing operations fail due to invalid data, configuration errors, or runtime issues.
:param message: Error message describing the failure
:type message: str
:param error_code: Optional error code for categorization
:type error_code: str or None
:param original_error: Original exception that caused this error
:type original_error: Exception or None
.. versionadded:: 1.0.0 Initial implementation of ProcessingError
Initialize ProcessingError.
:param message: Descriptive error message
:type message: str
:param error_code: Optional categorization code
:type error_code: str or None, optional
:param original_error: The original exception if this is a wrapper
:type original_error: Exception or None, optional
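Based on the constructor signature above, a minimal sketch that wraps a lower-level failure (the error code value is hypothetical)::

    try:
        processor.load_from_file("data/input.json")
    except FileNotFoundError as exc:
        raise ProcessingError(
            "Input file missing",
            error_code="E_INPUT",  # hypothetical categorization code
            original_error=exc,
        )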
Module-Level Functions¶
Functions Coming Soon
Module-level function documentation will be added when the source code is available.
Example Usage¶
The data and helper functions below are illustrative stand-ins so the example runs end to end:

from docstring_examples.sphinx_style import DataProcessor

# Illustrative stand-in data and helpers
document_data = [{"title": "a study of widgets", "pages": 42}, {"title": "misc notes", "pages": 3}]

def format_academic_text(value):
    # Title-case string fields; leave other values untouched
    return value.strip().title() if isinstance(value, str) else value

def substantial_docs_filter(record):
    # Keep only documents long enough to be worth exporting
    return record.get("pages", 0) > 10

# Create a processor instance
processor = DataProcessor(
    name="document_processor",
    validation_enabled=True,
    max_transformations=20
)

# Load and process document metadata
processor.load_data(document_data)
processor.transform_data(format_academic_text)
processor.apply_filter(substantial_docs_filter)

# Export processed metadata
processor.export_data("documents.json")
Style Benefits¶
Rich Markup¶
- Full reStructuredText support
- Cross-references and links
- Math notation support
- Complex formatting options (see the sketch below)
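A sketch of these features inside a docstring (the function and cross-reference target are illustrative, not part of this library)::

    import math

    def area(radius: float) -> float:
        r"""Compute a circle's area.

        Implements :math:`A = \pi r^2`.

        :param radius: Circle radius
        :type radius: float
        :returns: Area of the circle
        :rtype: float

        .. seealso:: :func:`circumference` for the perimeter.
        """
        return math.pi * radius ** 2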
Documentation Features¶
- Type annotations with :type: tags
- Detailed parameter descriptions
- Exception documentation
- See Also sections
Best Practices¶
- Use consistent field names
- Document all parameters and returns
- Include type information
- Leverage reStructuredText features