Dataflow Modeling¶
Schema-driven kernel modeling for ONNX-to-hardware transformation with efficient design space exploration.
Two-phase construction separates expensive setup from fast configuration: Design Space is built once and defines valid parameter ranges, while Design Point is configured many times to represent specific hardware instances. This enables efficient exploration by avoiding redundant computation.
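The split can be sketched in plain Python. This is an illustrative stand-in, not the brainsmith API: `build_space` and `configure` here are hypothetical names, and the divisor rule mirrors how tiling ranges are described later in this page.

```python
# Illustrative sketch of two-phase construction (not the real brainsmith API).

# Phase 1: expensive, done once - compute valid ranges (e.g., divisors of a
# tensor dimension for a tiling parameter like SIMD).
def build_space(channels: int) -> dict:
    simd_values = [d for d in range(1, channels + 1) if channels % d == 0]
    return {"SIMD": simd_values}

# Phase 2: cheap, done many times - pick a specific point and validate it
# against the precomputed ranges.
def configure(space: dict, config: dict) -> dict:
    for name, value in config.items():
        if value not in space[name]:
            raise ValueError(f"{value} not valid for {name}")
    return dict(config)

space = build_space(64)  # built once
points = [configure(space, {"SIMD": s}) for s in space["SIMD"]]  # many points
```

The expensive range computation happens exactly once; each candidate point only pays for a membership check.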
KernelOp ¶
Bases: HWCustomOp, ABC
Kernel operator base class.
Shapes extracted from ModelWrapper context, never stored in nodeattrs. Subclasses implement build_schema() to construct their KernelSchema.
Caching Strategy
- design_space: Cached (expensive to build, invalidated on structural changes)
- design_point: Regenerated from nodeattrs (guarantees consistency)
For execution compatibility notes, see module docstring.
Source code in brainsmith/dataflow/kernel_op.py
design_point `property` ¶
Current kernel configuration as design point (regenerated from nodeattrs).
This property regenerates on every access to ensure consistency with current nodeattrs. For better performance when accessing multiple properties, cache the design point in a local variable:
Example
```python
# GOOD: Cache locally for multiple accesses
point = self.design_point
simd = point.inputs["input"].stream_shape[-1]
width = point.inputs["input"].tensor_shape[-1]
dtype = point.inputs["input"].datatype

# AVOID: Multiple accesses trigger multiple rebuilds
simd = self.design_point.inputs["input"].stream_shape[-1]
width = self.design_point.inputs["input"].tensor_shape[-1]
dtype = self.design_point.inputs["input"].datatype
```
design_space `property` ¶
Cached design space (call method with model_w first to initialize).
apply_design_point ¶
Apply chosen design point to nodeattrs (persist to ONNX).
Syncs design point configuration back to node attributes. The design_point property will regenerate from these nodeattrs on next access.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `point` | `KernelDesignPoint` | Design point to apply | *required* |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If point is from a different design space |
| `RuntimeError` | If design space not initialized |
Example
```python
# DSE exploration
best_point = None
best_cycles = float('inf')
for point in op.design_space.sweep_dimension("SIMD"):
    if point.initiation_interval < best_cycles:
        best_cycles = point.initiation_interval
        best_point = point

# Apply winner to node
op.apply_design_point(best_point)
```
Source code in brainsmith/dataflow/kernel_op.py
build_design_space ¶
FINN API compatibility: Build design space.
This method provides compatibility with FINN's getHWCustomOp() utility, which detects KernelOp via kernel_schema attribute and calls this method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_w` | `ModelWrapper` | ModelWrapper for graph context | *required* |
Source code in brainsmith/dataflow/kernel_op.py
build_schema `abstractmethod` `classmethod` ¶
Build kernel schema from ONNX node.
Polymorphic method that handles both static and dynamic schemas:
- Static schemas: return a constant, ignore parameters
- Dynamic schemas: inspect node structure to build schema

Called in two contexts:
1. During init: model=None (schema built for instance)
2. During can_infer_from(): model provided (schema built for validation)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `node` | `NodeProto` | ONNX node (provides inputs, outputs, attributes) | *required* |
| `model` | `ModelWrapper \| None` | Optional ModelWrapper (provides shapes, datatypes for validation context) | *required* |

Returns:

| Type | Description |
|---|---|
| `KernelSchema` | KernelSchema defining kernel structure |
```python
# Example (static schema)
@classmethod
def build_schema(cls, node, model):
    return LAYERNORM_SCHEMA

# Example (dynamic schema)
@classmethod
def build_schema(cls, node, model):
    num_inputs = len(node.input)
    inputs = [InputSchema(name=f"input{i}", ...) for i in range(num_inputs)]
    return KernelSchema(name="Concat", inputs=inputs, outputs=[...])
```
Source code in brainsmith/dataflow/kernel_op.py
can_infer_from `classmethod` ¶
Check if this kernel can transform the given ONNX node (default: no).
get_input_datatype ¶
get_nodeattr_types ¶
Return nodeattr registry (datatypes + user params + kernel params).
Auto-delegates to kernel_schema.build_nodeattr_registry(), which includes:
- Interface datatypes (input0Datatype, output0Datatype, etc.)
- Internal datatypes (accumulatorDatatype, etc.)
- Template parameters (SIMD, PE, etc.)
- Kernel-specific parameters (epsilon, algorithm, etc.)
Automatically sets FIFO depth defaults based on kernel schema interface counts.
Only override if build_schema() needs to read nodeattrs (circular dependency). In that case, define nodeattrs explicitly before calling build_schema().
Source code in brainsmith/dataflow/kernel_op.py
get_normal_input_shape ¶
Return normal (unfolded) input shape as immutable tuple (FINN convention).
get_normal_output_shape ¶
Return normal (unfolded) output shape as immutable tuple (FINN convention).
get_number_output_values ¶
Get iteration count(s) for output values.
Matches FINN API pattern:
- Single-output kernels: returns int (iteration count)
- Multi-output kernels: returns dict mapping output names → iteration counts
Returns:

| Type | Description |
|---|---|
| `int` | For single-output kernels (e.g., MVAU, Thresholding, AddStreams) |
| `dict` | For multi-output kernels (e.g., DuplicateStreams, Split) |
Examples:
- Single-output: `512`
- Multi-output: `{'out0': 512, 'out1': 512}`
Source code in brainsmith/dataflow/kernel_op.py
get_output_datatype ¶
get_valid_ranges ¶
Valid parameter values for DSE (tiling + resource).
Returns:

| Type | Description |
|---|---|
| `dict[str, Union[OrderedParameter, frozenset]]` | Dict mapping parameter names to OrderedParameter (ordered sequences) or frozenset (discrete categories) |
Source code in brainsmith/dataflow/kernel_op.py
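A sketch of consuming such a ranges dict. OrderedParameter is modeled here as a plain sorted list; this is an assumption-laden stand-in for illustration, not the real class:

```python
# Stand-in for the dict returned by get_valid_ranges():
# ordered parameters modeled as sorted lists, discrete ones as frozensets.
ranges = {
    "SIMD": [1, 2, 4, 8, 16],                          # ordered (tiling)
    "ram_style": frozenset({"distributed", "block"}),  # discrete (resource)
}

kinds = {}
for name, values in ranges.items():
    if isinstance(values, frozenset):
        kinds[name] = "discrete"   # membership testing only, no navigation
    else:
        kinds[name] = "ordered"    # supports min/max access and stepping
```

A DSE driver can use this split to decide whether to sweep a dimension in order or simply enumerate its categories.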
infer_from `classmethod` ¶
Transform ONNX node to hardware kernel node(s).
infer_node_datatype ¶
Sync datatypes: model → nodeattrs (inputs), nodeattrs → model (outputs).
Initializes design space which syncs input datatypes from model to nodeattrs. Then propagates output datatypes from nodeattrs back to model.
Source code in brainsmith/dataflow/kernel_op.py
invalidate ¶
Invalidate cached design space after external graph changes.
Call this after transforms that change:
- Tensor shapes (padding, reshape)
- Datatypes in graph metadata
- Node rewiring (FIFO insertion)
Next method call with model_w will rebuild design space automatically. Design points regenerate on every access, so no explicit invalidation needed.
Example
```python
# After transform changes graph
model = ApplyPadding().apply(model)
for node in model.graph.node:
    op = getCustomOp(node)
    if isinstance(op, KernelOp):
        op.invalidate()
```
Source code in brainsmith/dataflow/kernel_op.py
make_shape_compatible_op ¶
Create standard ONNX op for shape inference (auto-detects pattern).
Source code in brainsmith/dataflow/kernel_op.py
set_nodeattr ¶
Set nodeattr and auto-invalidate design space if needed.
Design points regenerate on each access, so no explicit invalidation needed.
Source code in brainsmith/dataflow/kernel_op.py
Example:
```python
import brainsmith.dataflow as df
from brainsmith.registry import kernel
from onnx import NodeProto, helper
from qonnx.core.modelwrapper import ModelWrapper

@kernel(description="Hardware LayerNorm", author="Your Name")
class LayerNorm(df.KernelOp):
    """Hardware LayerNorm kernel."""

    @classmethod
    def build_schema(cls, node: NodeProto, model: ModelWrapper) -> df.KernelSchema:
        """Define kernel structure."""
        return LAYERNORM_SCHEMA

    @classmethod
    def can_infer_from(cls, node: NodeProto, model: ModelWrapper) -> bool:
        """Check if node can be converted to this kernel."""
        return node.op_type == "FuncLayerNorm"

    @classmethod
    def infer_from(cls, node: NodeProto, model: ModelWrapper, insert_index: int):
        """Transform ONNX node to hardware kernel."""
        hw_node = helper.make_node(
            "LayerNorm",
            inputs=list(node.input),
            outputs=list(node.output),
            domain="brainsmith.kernels",
        )
        return df.TransformationResult(
            nodes_to_insert=[hw_node],
            nodes_to_remove=[node],
        )
```
KernelOpError ¶
Bases: Exception
Exception raised by kernel operators with node context.
Attributes:

| Name | Description |
|---|---|
| `node` | ONNX node that caused the error |
| `message` | Error message |
Source code in brainsmith/dataflow/kernel_op.py
KernelSchema `dataclass` ¶
KernelSchema(name: str, inputs: list[InputSchema] = list(), outputs: list[OutputSchema] = list(), internal_datatypes: dict[str, Any] = dict(), kernel_params: dict[str, tuple] = dict(), dse_parameters: dict[str, ParameterSpec] = dict(), constraints: list[Constraint] = list(), attribute_mapping: dict[str, str] = dict())
Kernel specification defining structure and validation.
Combines interface definitions, validation constraints, and design space parameters. Defines structure only - shapes come from ONNX context, execution logic lives in KernelOp.
Attributes:

| Name | Type | Description |
|---|---|---|
| `name` | `str` | Kernel name |
| `inputs` | `list[InputSchema]` | Input interface schemas |
| `outputs` | `list[OutputSchema]` | Output interface schemas |
| `internal_datatypes` | `dict[str, Any]` | Internal datatype derivation specs (e.g., accumulator) |
| `kernel_params` | `dict[str, tuple]` | Kernel-specific parameters (e.g., epsilon, algorithm) |
| `dse_parameters` | `dict[str, ParameterSpec]` | Explorable resource/implementation parameters (e.g., ram_style) |
| `constraints` | `list[Constraint]` | Validation constraints (datatype, shape, ONNX requirements) |
| `attribute_mapping` | `dict[str, str]` | Map ONNX attributes to kernel parameters |
attribute_mapping `class-attribute` `instance-attribute` ¶
Map ONNX attributes to kernel parameters.
Example: {"epsilon": "epsilon", "axis": "normalized_axis"}
dse_parameters `class-attribute` `instance-attribute` ¶
Explorable resource/implementation parameters (ram_style, res_type, etc.).
Tiling parameters (PE, SIMD) are NOT declared here; they are auto-extracted from stream_tiling templates, with defaults computed from factoring.
Example: {"ram_style": ParameterSpec("ram_style", {"distributed", "block"}, "distributed")}
__post_init__ ¶
Validate schema structure and transformation consistency.
build_nodeattr_registry ¶
Build nodeattr registry from schema definition.
Schemas define STRUCTURE, not STORAGE. Generates the persistence layer from the structural schema, returning only attributes that need persistence:
- Datatypes (for interfaces and internals)
- Tiling parameters (SIMD, PE, etc.) - auto-extracted from stream_tiling
- DSE parameters (ram_style, res_type, etc.) - from dse_parameters
- Kernel-specific parameters (epsilon, algorithm, etc.) - from kernel_params

Shapes are NEVER stored in nodeattrs. They are either:
- Tensor shapes: extracted from ModelWrapper (ONNX graph)
- Block/stream shapes: computed from schema templates
Returns:

| Type | Description |
|---|---|
| `dict[str, tuple]` | Dict mapping nodeattr name to (type, required, default_value) |

Format: `{"attrName": ("i"|"s"|"f", True|False, default)}`
Source code in brainsmith/dataflow/schemas.py
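For illustration, a registry in the documented `{"attrName": (type, required, default)}` format might look like this. The attribute names below are hypothetical examples in the naming style described above, not output from a real schema:

```python
# Hypothetical nodeattr registry in the documented
# {"attrName": (type, required, default)} format, where the type code is
# "i" (int), "s" (string), or "f" (float).
registry = {
    "input0Datatype": ("s", True, ""),         # interface datatype
    "SIMD": ("i", False, 1),                   # tiling parameter
    "ram_style": ("s", False, "distributed"),  # DSE parameter
    "epsilon": ("f", True, 1e-5),              # kernel-specific parameter
}

# Note what is absent: no shape attribute appears, because shapes are
# never persisted in nodeattrs.
assert all(t in ("i", "s", "f") for t, _, _ in registry.values())
```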
get_optimization_nodeattrs ¶
Get nodeattrs that affect optimization (re-explore if changed).
Optimization nodeattrs are those whose changes only require re-exploring the design space (trying different stream shapes), not rebuilding the entire design space.
These include:
- Parallelization parameters (SIMD, PE, MW, MH, etc.): appear in stream_tiling templates and determine stream shapes during DSE
Returns:

| Type | Description |
|---|---|
| `set` | Set of optimization nodeattr names |
Example
`schema.get_optimization_nodeattrs()`
Source code in brainsmith/dataflow/schemas.py
get_structural_nodeattrs ¶
Get nodeattrs that affect design space (rebuild if changed).
Structural nodeattrs are those whose changes require rebuilding the entire KernelDesignSpace (not just reconfiguration).
These include:
- All datatypes (input, output, internal): affect internal datatype derivation (e.g., accumulator width depends on input datatype)
- Parameters in block_tiling (rare): affect block shape computation
Returns:

| Type | Description |
|---|---|
| `set` | Set of structural nodeattr names |
Example
`schema.get_structural_nodeattrs()`
Source code in brainsmith/dataflow/schemas.py
validate ¶
Validate the schema structure.
Source code in brainsmith/dataflow/schemas.py
Example:
```python
import brainsmith.dataflow as df
from brainsmith.dataflow import FULL_DIM

# Define kernel schema
LAYERNORM_SCHEMA = df.KernelSchema(
    name="LayerNorm",
    inputs=[
        df.InputSchema(
            name="input",
            block_tiling=[FULL_DIM],
            stream_tiling=["SIMD"],
            required_layout="NHWC",
        )
    ],
    outputs=[
        df.OutputSchema(
            name="output",
            block_tiling=[FULL_DIM],
            stream_tiling=[df.derive_dim("input", df.ShapeHierarchy.STREAM, -1)],
            required_layout="NHWC",
        )
    ],
    kernel_params={
        "epsilon": ("f", True, 1e-5),
    },
    constraints=[
        df.AttrCompare("epsilon", ">", 0),
    ],
)
```
InputSchema `dataclass` ¶
InputSchema(name: str, block_tiling: TilingSpec | None = None, stream_tiling: TilingSpec | None = None, datatype: Any | None = None, required_layout: str | None = None)
Input interface specification.
Defines input structure (tiling) and requirements (layout, datatype).
Attributes:

| Name | Type | Description |
|---|---|---|
| `name` | `str` | Interface name (e.g., "input", "input0") |
| `block_tiling` | `TilingSpec \| None` | Block tiling specification (e.g., [FULL_DIM, FULL_DIM]) |
| `stream_tiling` | `TilingSpec \| None` | Stream tiling specification (e.g., ["SIMD"], [1, 1, 1, "PE"]) |
| `datatype` | `Any \| None` | Datatype spec (None to use from ONNX, or DatatypeSpec union type to derive/optimize) |
| `required_layout` | `str \| None` | Expected input layout (e.g., "NHWC", "NCHW"), None if no requirement |
OutputSchema `dataclass` ¶
OutputSchema(name: str, block_tiling: TilingSpec | None = None, stream_tiling: TilingSpec | None = None, datatype: Any | None = None, required_layout: str | None = None, preserves_input_layout: bool = True)
Output interface specification.
Defines output structure (tiling), datatype derivation, and layout requirements.
Attributes:

| Name | Type | Description |
|---|---|---|
| `name` | `str` | Interface name (e.g., "output", "output0") |
| `block_tiling` | `TilingSpec \| None` | Block tiling specification |
| `stream_tiling` | `TilingSpec \| None` | Stream tiling specification |
| `datatype` | `Any \| None` | Datatype spec (None to use from ONNX, or DatatypeSpec union type to derive) |
| `required_layout` | `str \| None` | Expected output layout (e.g., "NHWC"), None if no requirement |
| `preserves_input_layout` | `bool` | Whether output preserves first input's layout (default True) |
ParameterSpec `dataclass` ¶
ParameterSpec(name: str, values: set[int | str] | Callable[[BuildContext], set[int | str]], type: Literal['int', 'string'] | None = None, default: int | str | None = None)
Explorable parameter in design space.
Represents resource allocation or implementation choices that can be explored during DSE (ram_style, res_type, mem_mode, etc.).
Does NOT include tiling dimensions (PE, SIMD) - those are auto-extracted from stream_tiling templates with valid values computed from factoring.
Container Type Convention (Ordered vs Discrete):

The container type determines how the dimension is treated during DSE:

- list/tuple → OrderedParameter (ordered sequences with navigation)
    - Supports min/max access, step_up/step_down, percentage-based indexing
    - Values are sorted automatically
    - Examples: depth=[128, 256, 512], num_layers=[1, 2, 4, 8]
- set/frozenset → Discrete (unordered categories)
    - Membership testing only, no navigation
    - Order doesn't matter
    - Examples: ram_style={"distributed", "block"}, res_type={"lut", "dsp"}
Type Declaration (Hybrid Approach):
- Literal values: Type inferred from first value (optional to specify)
- Callable values: Type MUST be explicitly specified
Attributes:

| Name | Type | Description |
|---|---|---|
| `name` | `str` | Dimension name (e.g., "ram_style", "depth") |
| `values` | `set[int \| str] \| Callable[[BuildContext], set[int \| str]]` | Valid values for this dimension. list/tuple: ordered sequence (enables navigation methods); set/frozenset: discrete categories (membership only); Callable: computed from BuildContext (for context-dependent values) |
| `type` | `Literal['int', 'string'] \| None` | Value type ("int" or "string"). Required for callable values; optional for literal values (inferred from first value); validated against values if both provided |
| `default` | `int \| str \| None` | Default value (None = auto-select: min for ordered, first for discrete) |
Examples:
```python
>>> # Ordered parameter - type inferred
>>> ParameterSpec("depth", [128, 256, 512, 1024], default=256)
>>> # Discrete parameter - type inferred
>>> ParameterSpec("ram_style", {"distributed", "block"}, default="distributed")
>>> # Callable parameter - type required
>>> ParameterSpec("depth", lambda ctx: compute_depths(ctx), type="int", default=256)
>>> # Explicit type for documentation (optional)
>>> ParameterSpec("mode", {"fast", "accurate"}, type="string")
```
Validation
- Callable values without type → ValueError
- Type mismatch with literal values → ValueError
- Invalid type (not "int" or "string") → ValueError
Note
Tiling dimensions (PE, SIMD) are ALWAYS ordered (auto-wrapped in OrderedParameter) since they're computed as divisors (naturally ordered sequences).
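The divisor computation behind these tiling ranges can be sketched as follows. This is illustrative only; the actual extraction logic lives in the schema builder:

```python
def tiling_values(dim: int) -> list[int]:
    """Valid values for a tiling parameter that must evenly divide `dim`.

    Divisors form a naturally ordered sequence, which is why tiling
    dimensions are always wrapped as ordered parameters.
    """
    return [d for d in range(1, dim + 1) if dim % d == 0]

# A SIMD parameter tiling a 12-wide dimension gets this ordered range:
print(tiling_values(12))  # [1, 2, 3, 4, 6, 12]
```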
__post_init__ ¶
Validate type specification against values.
Source code in brainsmith/dataflow/schemas.py
Example:
```python
import brainsmith.dataflow as df

# Ordered parameter (list/tuple enables navigation)
depth_param = df.ParameterSpec("depth", [128, 256, 512], default=256)

# Discrete parameter (set for unordered categories)
ram_param = df.ParameterSpec("ram_style", {"distributed", "block"}, default="distributed")

# Callable parameter (explicit type required)
dynamic_param = df.ParameterSpec("depth", lambda ctx: [128, 256], type="int", default=128)

# Use in kernel schema
schema = df.KernelSchema(
    name="MyKernel",
    inputs=[...],
    outputs=[...],
    dse_parameters={
        "ram_style": ram_param,
        "depth": depth_param,
    },
)
```
KernelDesignSpace `dataclass` ¶
KernelDesignSpace(name: str, inputs: dict[str, InterfaceDesignSpace], outputs: dict[str, InterfaceDesignSpace], internal_datatypes: dict[str, BaseDataType], optimization_constraints: list[Constraint], parameters: dict[str, Union[OrderedParameter, frozenset]])
Kernel design space built once, configured many times.
Built by DesignSpaceBuilder from ONNX context, acts as factory for KernelDesignPoint via configure(). Contains structure constant during DSE plus valid ranges for all explorable dimensions.
Attributes:

| Name | Type | Description |
|---|---|---|
| `name` | `str` | Kernel name |
| `inputs` | `dict[str, InterfaceDesignSpace]` | Input interface design spaces (by name) |
| `outputs` | `dict[str, InterfaceDesignSpace]` | Output interface design spaces (by name) |
| `internal_datatypes` | `dict[str, BaseDataType]` | Internal datatypes (e.g., accumulator) |
| `optimization_constraints` | `list[Constraint]` | Parametric constraints validated at configure() |
| `parameters` | `dict[str, Union[OrderedParameter, frozenset]]` | Explorable parameters: OrderedParameter (with navigation) or frozenset (discrete categories like ram_style) |
input_list `property` ¶
Inputs in declaration order (for ONNX positional mapping).
Returns inputs as list preserving dict insertion order (Python 3.7+). Useful when mapping to ONNX node.input[i] positions.
output_list `property` ¶
Outputs in declaration order (for ONNX positional mapping).
Returns outputs as list preserving dict insertion order (Python 3.7+). Useful when mapping to ONNX node.output[i] positions.
configure ¶
Instantiate kernel at specified point in design space.
Creates a KernelDesignPoint with resolved stream shapes and validates all parametric constraints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `dict[str, int \| str]` | Dimension values (tiling + resource) specifying the instance point | *required* |

Returns:

| Type | Description |
|---|---|
| `KernelDesignPoint` | KernelDesignPoint with fully resolved configuration |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If config invalid or missing dimensions |
| `ValidationError` | If parametric constraints fail |
Source code in brainsmith/dataflow/dse_models.py
get_ordered_parameter ¶
Get ordered parameter by name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Parameter name | *required* |

Returns:

| Type | Description |
|---|---|
| `OrderedParameter` | OrderedParameter instance |

Raises:

| Type | Description |
|---|---|
| `KeyError` | If parameter not found |
| `TypeError` | If parameter is discrete (not ordered) |
Source code in brainsmith/dataflow/dse_models.py
get_parameter ¶
Get parameter by name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Parameter name | *required* |

Returns:

| Type | Description |
|---|---|
| `Union[OrderedParameter, frozenset]` | OrderedParameter for ordered parameters, frozenset for discrete |

Raises:

| Type | Description |
|---|---|
| `KeyError` | If parameter not found |
Source code in brainsmith/dataflow/dse_models.py
is_discrete_parameter ¶
Check if parameter is discrete.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Parameter name | *required* |

Returns:

| Type | Description |
|---|---|
| `bool` | True if parameter is discrete (frozenset), False if ordered |

Raises:

| Type | Description |
|---|---|
| `KeyError` | If parameter not found |
Source code in brainsmith/dataflow/dse_models.py
is_ordered_parameter ¶
Check if parameter is ordered.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Parameter name | *required* |

Returns:

| Type | Description |
|---|---|
| `bool` | True if parameter is OrderedParameter, False if discrete (frozenset) |

Raises:

| Type | Description |
|---|---|
| `KeyError` | If parameter not found |
Source code in brainsmith/dataflow/dse_models.py
InterfaceDesignSpace `dataclass` ¶
InterfaceDesignSpace(name: str, tensor_shape: Shape, block_shape: Shape, stream_tiling: TilingSpec, datatype: BaseDataType, is_weight: bool = False, tensor_name: str | None = None, parallelism_dimension: OrderedParameter | None = None, parallelism_param: str | None = None)
Interface design space built once, configured many times.
Defines interface structure constant during DSE. Stream tiling preserved as template for resolution with specific parallelization parameters.
Attributes:

| Name | Type | Description |
|---|---|---|
| `name` | `str` | Interface name |
| `tensor_shape` | `Shape` | Full tensor dimensions |
| `block_shape` | `Shape` | Block dimensions (per-operation tile size) |
| `stream_tiling` | `TilingSpec` | Stream tiling template (e.g., ["SIMD"] or [1, 1, 1, "PE"]) |
| `datatype` | `BaseDataType` | Interface datatype |
| `is_weight` | `bool` | Whether this is a weight tensor (constant) |
| `tensor_name` | `str \| None` | ONNX tensor name for initializer lookups |
| `parallelism_dimension` | `OrderedParameter \| None` | OrderedParameter for stream parameter (None if no parallelism) |
| `parallelism_param` | `str \| None` | Parameter name for stream dimension (e.g., "SIMD", "PE") |
KernelDesignPoint `dataclass` ¶
KernelDesignPoint(design_space: KernelDesignSpace, inputs: dict[str, InterfaceDesignPoint], outputs: dict[str, InterfaceDesignPoint], config: dict[str, int | str])
Immutable kernel instance at specific design point.
Created by KernelDesignSpace.configure() with specific dimension values. Flyweight pattern minimizes memory - references parent design space, stores only configuration-specific data.
Navigation methods return new instances - the design point itself is immutable. Use with_dimension(), with_step_up(), sweep_dimension() to explore the space.
Attributes:

| Name | Type | Description |
|---|---|---|
| `design_space` | `KernelDesignSpace` | Parent KernelDesignSpace |
| `inputs` | `dict[str, InterfaceDesignPoint]` | Configured input interfaces (by name) |
| `outputs` | `dict[str, InterfaceDesignPoint]` | Configured output interfaces (by name) |
| `config` | `dict[str, int \| str]` | Dimension values defining this point (e.g., {"SIMD": 16, "PE": 4}) |
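The immutable-navigation pattern can be illustrated with a minimal frozen dataclass. This is a simplified stand-in for KernelDesignPoint, modeling only `config` and `with_dimension`:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    """Minimal stand-in: navigation returns a new instance, never mutates."""
    config: dict

    def with_dimension(self, name: str, value) -> "Point":
        # Copy the config with one dimension changed; the original is untouched.
        return Point(config={**self.config, name: value})

p1 = Point(config={"SIMD": 4, "PE": 1})
p2 = p1.with_dimension("SIMD", 8)
print(p1.config["SIMD"], p2.config["SIMD"])  # 4 8
```

Because each navigation step yields a fresh point, a DSE loop can hold onto intermediate candidates without any risk of later steps mutating them.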
input_list `property` ¶
Inputs in declaration order (for ONNX positional mapping).
max_block_folding_factor `property` ¶
Maximum block folding factor across all inputs.
max_tensor_folding_factor `property` ¶
Maximum tensor folding factor across all inputs.
output_list `property` ¶
Outputs in declaration order (for ONNX positional mapping).
get_input_stream_dimension ¶
Get parallelism dimension for input interface.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `index` | `int` | Input interface index (0-based) | *required* |

Returns:

| Type | Description |
|---|---|
| `Optional[OrderedParameter]` | OrderedParameter or None if no parallelism |

Raises:

| Type | Description |
|---|---|
| `IndexError` | If index out of range |
Source code in brainsmith/dataflow/dse_models.py
get_input_stream_param ¶
Get parallelism parameter name for input interface.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `index` | `int` | Input interface index (0-based) | *required* |

Returns:

| Type | Description |
|---|---|
| `str \| None` | Parameter name (e.g., "SIMD", "PE") or None if no parallelism |

Raises:

| Type | Description |
|---|---|
| `IndexError` | If index out of range |
Source code in brainsmith/dataflow/dse_models.py
get_input_stream_value ¶
Get current parallelism value for input interface.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `index` | `int` | Input interface index (0-based) | *required* |

Returns:

| Type | Description |
|---|---|
| `int \| None` | Current parallelism value or None if no parallelism |

Raises:

| Type | Description |
|---|---|
| `IndexError` | If index out of range |
Source code in brainsmith/dataflow/dse_models.py
get_output_stream_dimension ¶
Get parallelism dimension for output interface.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `index` | `int` | Output interface index (0-based) | *required* |

Returns:

| Type | Description |
|---|---|
| `Optional[OrderedParameter]` | OrderedParameter or None if no parallelism |

Raises:

| Type | Description |
|---|---|
| `IndexError` | If index out of range |
Source code in brainsmith/dataflow/dse_models.py
get_output_stream_param ¶
Get parallelism parameter name for output interface.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `index` | `int` | Output interface index (0-based) | *required* |

Returns:

| Type | Description |
|---|---|
| `str \| None` | Parameter name or None if no parallelism |

Raises:

| Type | Description |
|---|---|
| `IndexError` | If index out of range |
Source code in brainsmith/dataflow/dse_models.py
get_output_stream_value ¶
Get current parallelism value for output interface.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `index` | `int` | Output interface index (0-based) | *required* |

Returns:

| Type | Description |
|---|---|
| `int \| None` | Current parallelism value or None if no parallelism |

Raises:

| Type | Description |
|---|---|
| `IndexError` | If index out of range |
Source code in brainsmith/dataflow/dse_models.py
output_stream_shape ¶
Stream shape for output.
Returns the output's stream_shape attribute (resolved during configure).
output_stream_width_bits ¶
Stream width in bits for output.
Returns the actual stream width based on the output's stream_shape.
Source code in brainsmith/dataflow/dse_models.py
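Assuming stream width is the product of the stream-shape elements times the element bitwidth (a plausible reading of "actual stream width based on the output's stream_shape", not confirmed by the source), the computation would look like:

```python
from math import prod

def stream_width_bits(stream_shape: tuple, element_bits: int) -> int:
    # Assumption: width = elements transferred per cycle x bits per element.
    return prod(stream_shape) * element_bits

# e.g., a [1, 1, 1, 16] stream of 8-bit elements:
print(stream_width_bits((1, 1, 1, 16), 8))  # 128
```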
sweep_dimension ¶
sweep_dimension(name: str, start: int | str | None = None, stop: int | str | None = None) -> Iterator[KernelDesignPoint]
Sweep through all valid values for a dimension.
For ordered dimensions, iterates in order from start to stop. For discrete dimensions, iterates in sorted order (ignores start/stop).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Dimension to sweep | *required* |
| `start` | `int \| str \| None` | Start value (None = use min/first), ordered dims only | `None` |
| `stop` | `int \| str \| None` | Stop value (None = use max/last), ordered dims only | `None` |

Yields:

| Type | Description |
|---|---|
| `KernelDesignPoint` | KernelDesignPoint for each value in range |

Raises:

| Type | Description |
|---|---|
| `KeyError` | If dimension not found |
Examples:
```python
>>> # Partial sweep (ordered)
>>> for point in base.sweep_dimension("SIMD", start=8, stop=64):
...     evaluate(point)
>>> # Discrete sweep (ignores start/stop)
>>> for point in base.sweep_dimension("ram_style"):
...     evaluate(point)
```
Source code in brainsmith/dataflow/dse_models.py
sweep_percentage ¶
sweep_percentage(name: str, percentages: list[float], rounding: Literal['natural', 'down', 'up'] = 'natural') -> Iterator[KernelDesignPoint]
Sweep through ordered dimension at specified percentage points.
Only valid for ordered dimensions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Ordered dimension to sweep | *required* |
| `percentages` | `list[float]` | List of percentage points (0.0-1.0) | *required* |
| `rounding` | `Literal['natural', 'down', 'up']` | Rounding mode for fractional indices | `'natural'` |

Yields:

| Type | Description |
|---|---|
| `KernelDesignPoint` | KernelDesignPoint for each percentage |

Raises:

| Type | Description |
|---|---|
| `KeyError` | If dimension not found |
| `TypeError` | If dimension is discrete (not ordered) |
Examples:
```python
>>> # Quartile sweep
>>> for point in base.sweep_percentage("PE", [0.0, 0.25, 0.5, 0.75, 1.0]):
...     evaluate(point)
>>> # Decile sweep
>>> deciles = [i/10 for i in range(11)]
>>> for point in base.sweep_percentage("SIMD", deciles):
...     evaluate(point)
```
Source code in brainsmith/dataflow/dse_models.py
with_dimension ¶
Create new design point with specified dimension value.
Works for both ordered and discrete dimensions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Dimension name | *required* |
| `value` | `int \| str` | New value for dimension | *required* |

Returns:

| Type | Description |
|---|---|
| `KernelDesignPoint` | New KernelDesignPoint with updated dimension |

Raises:

| Type | Description |
|---|---|
| `KeyError` | If dimension not found |
| `ValueError` | If value not valid for dimension |
Examples:
```python
>>> point = design_space.configure({"SIMD": 4, "PE": 1})
>>> point2 = point.with_dimension("SIMD", 8)
>>> point2.config["SIMD"]
8
```
Source code in brainsmith/dataflow/dse_models.py
with_input_stream ¶
Set input interface stream parallelism by index.
Convenience method for interface-agnostic parallelism navigation. Automatically resolves the parallelism parameter name from the interface.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `index` | `int` | Input interface index (0-based) | *required* |
| `value` | `int` | Parallelism value | *required* |

Returns:

| Type | Description |
|---|---|
| `KernelDesignPoint` | New KernelDesignPoint with updated parallelism |

Raises:

| Type | Description |
|---|---|
| `IndexError` | If index out of range |
| `ValueError` | If interface has no parallelism parameter or value invalid |
Example
```python
# Set first input to PE=16
point2 = point.with_input_stream(0, 16)
```
Source code in brainsmith/dataflow/dse_models.py
with_input_stream_percentage ¶
with_input_stream_percentage(index: int, percentage: float, rounding: Literal['natural', 'down', 'up'] = 'natural') -> KernelDesignPoint
Set input stream parallelism to percentage of range.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `index` | `int` | Input interface index (0-based) | *required* |
| `percentage` | `float` | Value from 0.0 to 1.0 (0.0=min, 1.0=max) | *required* |
| `rounding` | `Literal['natural', 'down', 'up']` | How to round fractional indices | `'natural'` |

Returns:

| Type | Description |
|---|---|
| `KernelDesignPoint` | New KernelDesignPoint with parallelism at percentage |

Raises:

| Type | Description |
|---|---|
| `IndexError` | If index out of range |
| `ValueError` | If interface has no parallelism parameter or percentage invalid |
Source code in brainsmith/dataflow/dse_models.py
with_max ¶
Create new design point with ordered dimension at maximum.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Dimension name (must be ordered) | required |

Returns:

| Type | Description |
|---|---|
| KernelDesignPoint | New KernelDesignPoint with dimension at maximum |

Raises:

| Type | Description |
|---|---|
| KeyError | If dimension not found |
| TypeError | If dimension is discrete (not ordered) |
Examples:
>>> point = design_space.configure({"SIMD": 8, "PE": 4})
>>> point2 = point.with_max("SIMD")
>>> point2.config["SIMD"]
64
Source code in brainsmith/dataflow/dse_models.py
with_min ¶
Create new design point with ordered dimension at minimum.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Dimension name (must be ordered) | required |

Returns:

| Type | Description |
|---|---|
| KernelDesignPoint | New KernelDesignPoint with dimension at minimum |

Raises:

| Type | Description |
|---|---|
| KeyError | If dimension not found |
| TypeError | If dimension is discrete (not ordered) |
Examples:
>>> point = design_space.configure({"SIMD": 8, "PE": 4})
>>> point2 = point.with_min("SIMD")
>>> point2.config["SIMD"]
1
Source code in brainsmith/dataflow/dse_models.py
with_output_stream ¶
Set output interface stream parallelism by index.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| index | int | Output interface index (0-based) | required |
| value | int | Parallelism value | required |

Returns:

| Type | Description |
|---|---|
| KernelDesignPoint | New KernelDesignPoint with updated parallelism |

Raises:

| Type | Description |
|---|---|
| IndexError | If index out of range |
| ValueError | If interface has no parallelism parameter or value invalid |
Source code in brainsmith/dataflow/dse_models.py
with_output_stream_percentage ¶
with_output_stream_percentage(index: int, percentage: float, rounding: Literal['natural', 'down', 'up'] = 'natural') -> KernelDesignPoint
Set output stream parallelism to percentage of range.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| index | int | Output interface index (0-based) | required |
| percentage | float | Value from 0.0 to 1.0 (0.0=min, 1.0=max) | required |
| rounding | Literal['natural', 'down', 'up'] | How to round fractional indices | 'natural' |

Returns:

| Type | Description |
|---|---|
| KernelDesignPoint | New KernelDesignPoint with parallelism at percentage |

Raises:

| Type | Description |
|---|---|
| IndexError | If index out of range |
| ValueError | If interface has no parallelism parameter or percentage invalid |
Source code in brainsmith/dataflow/dse_models.py
with_percentage ¶
Create new design point with ordered dimension at percentage.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Dimension name (must be ordered) | required |
| percentage | float | Position in range [0.0, 1.0] | required |
| rounding | str | 'natural', 'down', or 'up' | 'natural' |

Returns:

| Type | Description |
|---|---|
| KernelDesignPoint | New KernelDesignPoint with dimension at percentage |

Raises:

| Type | Description |
|---|---|
| KeyError | If dimension not found |
| TypeError | If dimension is discrete (not ordered) |
| ValueError | If percentage out of range |
Examples:
>>> point = design_space.configure({"SIMD": 4, "PE": 1})
>>> point2 = point.with_percentage("SIMD", 0.5)
>>> point2.config["SIMD"]
8
Source code in brainsmith/dataflow/dse_models.py
with_step_down ¶
Create new design point with ordered dimension stepped down.
Clamps at minimum if n steps would go below bounds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Dimension name (must be ordered) | required |
| n | int | Number of steps to move down (default 1) | 1 |

Returns:

| Type | Description |
|---|---|
| KernelDesignPoint | New KernelDesignPoint with dimension stepped down |

Raises:

| Type | Description |
|---|---|
| KeyError | If dimension not found |
| TypeError | If dimension is discrete (not ordered) |
| ValueError | If current value not in dimension or n < 0 |
Examples:
>>> point = design_space.configure({"SIMD": 16, "PE": 4})
>>> point2 = point.with_step_down("SIMD", 1)
>>> point2.config["SIMD"]
8
Source code in brainsmith/dataflow/dse_models.py
with_step_up ¶
Create new design point with ordered dimension stepped up.
Clamps at maximum if n steps would exceed bounds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Dimension name (must be ordered) | required |
| n | int | Number of steps to move up (default 1) | 1 |

Returns:

| Type | Description |
|---|---|
| KernelDesignPoint | New KernelDesignPoint with dimension stepped up |

Raises:

| Type | Description |
|---|---|
| KeyError | If dimension not found |
| TypeError | If dimension is discrete (not ordered) |
| ValueError | If current value not in dimension or n < 0 |
Examples:
>>> point = design_space.configure({"SIMD": 4, "PE": 1})
>>> point2 = point.with_step_up("SIMD", 2)
>>> point2.config["SIMD"]
16
Source code in brainsmith/dataflow/dse_models.py
Example:
# Get design point from kernel operator
op._ensure_ready(model)
point = op.design_point
# Configure using interface-based API (for stream parameters)
point = point.with_input_stream(0, 32) # Set input PE=32
point = point.with_output_stream(0, 16) # Set output PE=16
# Configure using dimension-based API (for generic DSE)
point = point.with_dimension("ram_style", "distributed")
point = point.with_dimension("depth", 256)
# Apply configuration
op.apply_design_point(point)
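The percentage and step navigation methods lend themselves to simple sweep loops during exploration. The following self-contained sketch shows the pattern with 'natural' index rounding over an assumed SIMD range; `sweep` and the toy cost model are illustrative names, not part of the API.

```python
# Hypothetical sweep: walk an ordered value set at evenly spaced
# percentages and keep the lowest-cost candidate. `evaluate` stands in
# for any user-supplied cost model (latency, resources, ...).
def sweep(values, steps, evaluate):
    best_value, best_cost = None, float("inf")
    for i in range(steps + 1):
        pct = i / steps
        idx = round(pct * (len(values) - 1))  # 'natural' rounding
        candidate = values[idx]
        cost = evaluate(candidate)
        if cost < best_cost:
            best_value, best_cost = candidate, cost
    return best_value

simd_range = (1, 2, 4, 8, 16, 32, 64)
# Toy cost: latency falls with parallelism, resource penalty grows with it.
best = sweep(simd_range, steps=4, evaluate=lambda v: 64 / v + 0.5 * v)
```

With the real API the inner body would be `point.with_percentage("SIMD", pct)` followed by whatever estimate the kernel exposes.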
InterfaceDesignPoint
dataclass
¶
Interface instance with resolved parallelization.
Flyweight pattern: references parent design space, stores only configuration-specific stream_shape. Delegates tensor_shape, block_shape, and datatype to design space for minimal memory overhead.
Attributes:

| Name | Type | Description |
|---|---|---|
| design_space | InterfaceDesignSpace | Parent InterfaceDesignSpace |
| stream_shape | Shape | Resolved stream dimensions for this configuration |
block_folding_factor
property
¶
Cycles to stream one block.
Product of stream_cycles_shape. Uses ceiling division: a block of size 32 with stream width 10 requires ceil(32/10) = 4 cycles (3 full + 1 partial).
stream_cycles_shape
property
¶
Per-dimension cycles needed to stream one block.
Returns shape where each dimension is ceil(block_dim / stream_dim). Describes temporal execution: how we stream each tile.
block_shape=(32, 16), stream_shape=(8, 4)
→ stream_cycles_shape=(4, 4) # 4x4 cycles per block
tensor_blocks_shape
property
¶
Per-dimension blocks needed to tile tensor.
Returns shape where each dimension is ceil(tensor_dim / block_dim). Describes spatial decomposition: how we tile the problem.
tensor_shape=(100, 64), block_shape=(32, 16)
→ tensor_blocks_shape=(4, 4) # 4x4 grid of blocks
tensor_folding_factor
property
¶
Number of blocks needed to cover full tensor.
Product of tensor_blocks_shape. Uses ceiling division: a tensor of size 100 with block size 32 requires ceil(100/32) = 4 blocks (3 full + 1 partial).
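The hierarchy arithmetic above can be checked with plain Python, using the same example shapes as the property docstrings (illustrative only; the real values come from InterfaceDesignPoint's properties).

```python
import math

# Worked example of the shape hierarchy: tensor -> blocks -> stream cycles.
tensor_shape = (100, 64)
block_shape = (32, 16)
stream_shape = (8, 4)

# Spatial decomposition: how many blocks tile the tensor (ceiling division).
tensor_blocks_shape = tuple(math.ceil(t / b) for t, b in zip(tensor_shape, block_shape))
# Temporal execution: cycles needed to stream one block.
stream_cycles_shape = tuple(math.ceil(b / s) for b, s in zip(block_shape, stream_shape))

tensor_folding_factor = math.prod(tensor_blocks_shape)  # blocks per tensor -> 16
block_folding_factor = math.prod(stream_cycles_shape)   # cycles per block -> 16

# Total cycles to stream the whole tensor is the product of both factors.
total_cycles = tensor_folding_factor * block_folding_factor  # 256
```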
get_shape ¶
Get shape at specified hierarchy level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| hierarchy | ShapeHierarchy | Which level of the shape hierarchy to retrieve | required |

Returns:

| Type | Description |
|---|---|
| Shape | Shape at the specified level |

Raises:

| Type | Description |
|---|---|
| ValueError | If hierarchy is invalid |
Source code in brainsmith/dataflow/dse_models.py
OrderedParameter
dataclass
¶
Ordered parameter for DSE navigation.
Stores discrete values in sorted order, enabling navigation operations like stepping, percentage-based indexing, and min/max access.
Used for parallelization parameters (PE, SIMD, MW, MH) and other explorable parameters with natural ordering (depth, num_layers, etc.).
Attributes:

| Name | Type | Description |
|---|---|---|
| name | str | Parameter name (e.g., "SIMD", "PE", "depth") |
| values | tuple[int, ...] | Sorted tuple of valid values |
| default | int \| None | Default value (None = minimum) |
Examples:
>>> simd = OrderedParameter("SIMD", (1, 2, 4, 8, 16, 32, 64))
>>> simd.min()
1
>>> simd.at_percentage(0.5)
8
>>> simd.step_up(8, n=2)
32
__contains__ ¶
__iter__ ¶
__len__ ¶
__post_init__ ¶
Validate invariants: sorted, unique, non-empty.
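A minimal sketch of what this validation could look like, assuming a frozen dataclass with the documented fields; `OrderedParameterSketch` is a hypothetical name, not the library class.

```python
from __future__ import annotations

from dataclasses import dataclass


# Illustrative stand-in showing the invariants __post_init__ enforces:
# values must be non-empty, unique, and sorted ascending.
@dataclass(frozen=True)
class OrderedParameterSketch:
    name: str
    values: tuple[int, ...]
    default: int | None = None

    def __post_init__(self) -> None:
        if not self.values:
            raise ValueError(f"{self.name}: values must be non-empty")
        if len(set(self.values)) != len(self.values):
            raise ValueError(f"{self.name}: values must be unique")
        if tuple(sorted(self.values)) != self.values:
            raise ValueError(f"{self.name}: values must be sorted ascending")
```

Validating once at construction lets navigation methods like step_up and at_percentage index into `values` without re-checking ordering on every call.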
Source code in brainsmith/dataflow/ordered_parameter.py
__repr__ ¶
String representation.
Source code in brainsmith/dataflow/ordered_parameter.py
at_index ¶
Get value at index (supports negative indexing).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| idx | int | Index position (0-based, supports negative like Python lists) | required |

Returns:

| Type | Description |
|---|---|
| int | Value at index |

Raises:

| Type | Description |
|---|---|
| IndexError | If index out of range |
Examples:
>>> param = OrderedParameter("PE", (1, 2, 4, 8, 16))
>>> param.at_index(0)
1
>>> param.at_index(-1)
16
>>> param.at_index(2)
4
Source code in brainsmith/dataflow/ordered_parameter.py
at_percentage ¶
Get value at percentage position in ordered sequence (0.0-1.0).
Maps percentage to continuous index space, then rounds to discrete index. Useful for sweeping through parameter at regular intervals regardless of actual vector length.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| percentage | float | Position in range [0.0, 1.0]: 0.0 → first value (min), 1.0 → last value (max), 0.5 → middle value | required |
| rounding | Literal['natural', 'down', 'up'] | How to round fractional indices: 'natural' rounds to nearest (default, balanced), 'down' floors (conservative, prefer smaller values), 'up' ceilings (aggressive, prefer larger values) | 'natural' |

Returns:

| Type | Description |
|---|---|
| int | Value at percentage position |

Raises:

| Type | Description |
|---|---|
| ValueError | If percentage not in [0.0, 1.0] or invalid rounding mode |
Examples:
>>> param = OrderedParameter("PE", (1, 2, 4, 8, 16)) # 5 values
>>> param.at_percentage(0.0)
1
>>> param.at_percentage(1.0)
16
>>> param.at_percentage(0.5, rounding='natural')
4 # Middle value (index 2 of 0-4)
>>> param.at_percentage(0.75, rounding='down')
8 # 0.75 * 4 = 3.0 → floor(3.0) = 3 → values[3] = 8
>>> # With 4 values, percentages map cleanly to indices
>>> param4 = OrderedParameter("X", (10, 20, 30, 40))
>>> param4.at_percentage(0.0)
10 # 0.0 * 3 = 0
>>> param4.at_percentage(0.333, rounding='natural')
20 # 0.333 * 3 ≈ 1.0 → round(1.0) = 1
>>> param4.at_percentage(1.0)
40 # 1.0 * 3 = 3
Source code in brainsmith/dataflow/ordered_parameter.py
get_default ¶
index_of ¶
Get index of value in ordered sequence.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | int | Value to find | required |

Returns:

| Type | Description |
|---|---|
| int | Zero-based index of value |

Raises:

| Type | Description |
|---|---|
| ValueError | If value not in parameter |
Examples:
>>> param = OrderedParameter("SIMD", (1, 2, 4, 8, 16))
>>> param.index_of(4)
2
>>> param.index_of(16)
4
Source code in brainsmith/dataflow/ordered_parameter.py
max ¶
min ¶
step_down ¶
Step down n positions from current value.
Clamps at minimum if n steps would go below bounds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| current | int | Current value (must be in parameter) | required |
| n | int | Number of steps to move down (positive integer) | 1 |

Returns:

| Type | Description |
|---|---|
| int | New value n steps down (clamped at min) |

Raises:

| Type | Description |
|---|---|
| ValueError | If current value not in parameter or n < 0 |
Examples:
>>> param = OrderedParameter("SIMD", (1, 2, 4, 8, 16, 32, 64))
>>> param.step_down(16, 1)
8
>>> param.step_down(16, 2)
4
>>> param.step_down(4, 10)
1 # Clamped at min
Source code in brainsmith/dataflow/ordered_parameter.py
step_up ¶
Step up n positions from current value.
Clamps at maximum if n steps would exceed bounds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| current | int | Current value (must be in parameter) | required |
| n | int | Number of steps to move up (positive integer) | 1 |

Returns:

| Type | Description |
|---|---|
| int | New value n steps up (clamped at max) |

Raises:

| Type | Description |
|---|---|
| ValueError | If current value not in parameter or n < 0 |
Examples:
>>> param = OrderedParameter("PE", (1, 2, 4, 8, 16, 32, 64))
>>> param.step_up(4, 1)
8
>>> param.step_up(4, 2)
16
>>> param.step_up(32, 10)
64 # Clamped at max
Source code in brainsmith/dataflow/ordered_parameter.py
DesignSpaceBuilder ¶
Builds kernel design space from schema and ONNX context.
Two-phase construction:
1. build() creates KernelDesignSpace once (tensor/block shapes, datatypes, valid ranges)
2. design_space.configure() creates KernelDesignPoint many times (stream shapes for specific params)
Example
builder = DesignSpaceBuilder()
context = BuildContext(
    schema=kernel_schema,
    model_w=model_wrapper,
    node_inputs=list(node.input),
    node_outputs=list(node.output),
    param_getter=self.get_nodeattr,
    param_setter=self.set_nodeattr,
    node_name=node.name
)
design_space = builder.build(context)
point = design_space.configure({"SIMD": 64, "PE": 1})
build ¶
Build kernel design space from ONNX context.
Resolves all properties constant across parallelization configs:
- Tensor shapes (from ONNX graph)
- Block shapes (from block_tiling templates)
- Datatypes (from ONNX graph + union type derivation)
- Internal datatypes (from union type derivation)
- Structural constraints (validated once)
- Valid parallelization parameter ranges (divisor sets)
Stream shapes are left as templates for later resolution via configure().
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ctx | BuildContext | Build context with ONNX node and ModelWrapper | required |

Returns:

| Type | Description |
|---|---|
| KernelDesignSpace | KernelDesignSpace ready for configuration exploration |

Raises:

| Type | Description |
|---|---|
| ValueError | If structural constraints fail |
Source code in brainsmith/dataflow/builder.py
BuildContext
dataclass
¶
BuildContext(schema: KernelSchema, model_w: ModelWrapper, node_inputs: list[str], node_outputs: list[str], param_getter: Callable[[str], Any], param_setter: Callable[[str, Any], None], node_name: str = '<unknown>')
Build context for kernel design space construction.
Encapsulates all data needed to build a KernelDesignSpace from a schema.
Attributes:

| Name | Type | Description |
|---|---|---|
| schema | KernelSchema | KernelSchema defining structure |
| model_w | ModelWrapper | ModelWrapper for ONNX graph access |
| node_inputs | list[str] | ONNX node input tensor names |
| node_outputs | list[str] | ONNX node output tensor names |
| param_getter | Callable[[str], Any] | Function to retrieve nodeattr values |
| param_setter | Callable[[str, Any], None] | Function to store nodeattr values |
| node_name | str | Node name for error messages |
Constraint ¶
Bases: Protocol
Validation rule for kernel constraints.
Pure predicate that validates kernel properties during construction. Uses duck typing to work with any validation context providing required methods.
Required methods:
- check(ctx) → Optional[str]
- describe() → str
evaluation_phase
property
¶
When to evaluate this constraint during kernel construction.
Returns:

| Type | Description |
|---|---|
| str | 'structural' - Evaluated once during design space construction (Phase 1). Constraints that determine backend compatibility (tensor shapes, block shapes, datatypes, etc.) |
| str | 'optimization' - Evaluated per-configuration during configure() (Phase 2). Constraints that bound optimization space (stream shapes, parallelization parameters, etc.) |
The default implementation uses a heuristic:
- Constraints with hierarchy == STREAM are optimization constraints
- All other constraints are structural
Subclasses can override this property for explicit classification.
Examples:
- DatatypeInteger: 'structural' (no hierarchy, datatype determines compatibility)
- ShapesEqual(hierarchy=TENSOR): 'structural' (tensor shape determines compatibility)
- ShapesEqual(hierarchy=BLOCK): 'structural' (block shape determines compatibility)
- ShapesEqual(hierarchy=STREAM): 'optimization' (stream shape bounds optimization)
- DimensionDivisible(hierarchy=STREAM): 'optimization' (stream dim bounds optimization)
check ¶
Check constraint in given context.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ctx | | Validation context (DesignSpaceValidationContext or ConfigurationValidationContext) | required |

Returns:

| Type | Description |
|---|---|
| str \| None | None if satisfied, error message string if violated |
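A hypothetical constraint satisfying the protocol might look like the following. The class name, the string hierarchy token, and the duck-typed context are assumptions for illustration; the real API passes a ShapeHierarchy value and one of the documented validation contexts.

```python
# Illustrative constraint following the protocol's duck-typed interface:
# check(ctx) returns None when satisfied, an error string when violated;
# describe() returns a human-readable summary of the rule.
class StreamWidthsEqual:
    """Require two interfaces to stream the same number of elements per cycle."""

    def __init__(self, a: str, b: str):
        self.a = a
        self.b = b

    @property
    def evaluation_phase(self) -> str:
        # Stream-shape constraints bound the optimization space (Phase 2).
        return "optimization"

    def check(self, ctx):
        # "stream" stands in for ShapeHierarchy.STREAM in this sketch.
        wa = ctx.get_shape(self.a, "stream")[-1]
        wb = ctx.get_shape(self.b, "stream")[-1]
        if wa != wb:
            return f"stream widths differ: {self.a}={wa}, {self.b}={wb}"
        return None

    def describe(self) -> str:
        return f"stream({self.a})[-1] == stream({self.b})[-1]"
```

Because the protocol only requires check() and describe(), any plain class works; no inheritance from a library base class is needed.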
Source code in brainsmith/dataflow/constraints.py
ValidationError ¶
Bases: ValueError
Validation error with context and suggestions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Error message | required |
| location | str | Optional context (e.g., "input.stream[1]") | '' |
| suggestions | list | Optional list of suggestions | None |
Source code in brainsmith/dataflow/validation.py
DesignSpaceValidationContext
dataclass
¶
DesignSpaceValidationContext(inputs: dict[str, Any], outputs: dict[str, Any], internal_datatypes: dict[str, DataType], param_getter: Callable[[str], Any] | None = None)
Validation context for structural constraints during design space build.
Used during KernelDesignSpace construction to validate tensor shapes, block shapes, and datatypes. Stream shapes not available until configure().
Attributes:

| Name | Type | Description |
|---|---|---|
| inputs | dict[str, Any] | Input interfaces (InterfaceDesignSpace) |
| outputs | dict[str, Any] | Output interfaces (InterfaceDesignSpace) |
| internal_datatypes | dict[str, DataType] | Internal datatypes |
| param_getter | Callable[[str], Any] \| None | Optional nodeattr getter |
Example
ctx = DesignSpaceValidationContext(
    inputs=interfaces_input,
    outputs=interfaces_output,
    internal_datatypes=internal_datatypes,
    param_getter=get_nodeattr
)
for constraint in structural_constraints:
    if error := constraint.check(ctx):
        raise ValueError(error)
get_datatype ¶
Get datatype from interface or internal datatypes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Interface or internal datatype name | required |

Returns:

| Type | Description |
|---|---|
| DataType | DataType |

Raises:

| Type | Description |
|---|---|
| KeyError | If interface/datatype not found |
Source code in brainsmith/dataflow/validation.py
get_param ¶
Get kernel parameter value.
Note: Primarily for rare block_tiling parameters. Most parameters are stream_tiling and only available during configure().
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Parameter name | required |

Returns:

| Type | Description |
|---|---|
| Any | Parameter value |

Raises:

| Type | Description |
|---|---|
| RuntimeError | If no param_getter provided |
| KeyError | If parameter not found |
Source code in brainsmith/dataflow/validation.py
get_shape ¶
Get shape at hierarchy level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Interface name | required |
| hierarchy | ShapeHierarchy | Which level of hierarchy (TENSOR or BLOCK only) | TENSOR |

Returns:

| Type | Description |
|---|---|
| tuple[int, ...] | Shape tuple |

Raises:

| Type | Description |
|---|---|
| KeyError | If interface not found |
| RuntimeError | If STREAM hierarchy requested (not available in design space) |
Source code in brainsmith/dataflow/validation.py
is_dynamic ¶
Check if interface is dynamic (no initializer).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Interface name | required |

Returns:

| Type | Description |
|---|---|
| bool | True if dynamic (activations), False if static (weights) |
Source code in brainsmith/dataflow/validation.py
ConfigurationValidationContext
dataclass
¶
Validation context for optimization constraints during configure().
Used during KernelDesignSpace.configure() to validate constraints on stream shapes and parallelization parameters.
Attributes:

| Name | Type | Description |
|---|---|---|
| configured_model | Any | KernelDesignPoint with configured interfaces |
| params | dict[str, int] | Parallelization parameters |
Example
ctx = ConfigurationValidationContext(
    configured_model=instance,
    params=params
)
for constraint in parametric_constraints:
    if error := constraint.check(ctx):
        raise ValueError(error)
get_datatype ¶
Get datatype from interface or internal datatypes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Interface or internal datatype name | required |

Returns:

| Type | Description |
|---|---|
| DataType | DataType |

Raises:

| Type | Description |
|---|---|
| KeyError | If interface/datatype not found |
Source code in brainsmith/dataflow/validation.py
get_param ¶
Get kernel parameter value.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Parameter name (e.g., "SIMD", "PE", "epsilon") | required |

Returns:

| Type | Description |
|---|---|
| Any | Parameter value |

Raises:

| Type | Description |
|---|---|
| KeyError | If parameter not found |
Source code in brainsmith/dataflow/validation.py
get_shape ¶
Get shape at hierarchy level.
All hierarchies available (TENSOR, BLOCK, STREAM).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Interface name | required |
| hierarchy | ShapeHierarchy | Which level of hierarchy | TENSOR |

Returns:

| Type | Description |
|---|---|
| tuple[int, ...] | Shape tuple |

Raises:

| Type | Description |
|---|---|
| KeyError | If interface not found |
Source code in brainsmith/dataflow/validation.py
is_dynamic ¶
Check if interface is dynamic (no initializer).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Interface name | required |

Returns:

| Type | Description |
|---|---|
| bool | True if dynamic (activations), False if static (weights) |
Source code in brainsmith/dataflow/validation.py
TransformationResult
dataclass
¶
TransformationResult(nodes_to_insert: list[NodeProto], nodes_to_remove: list[NodeProto], metadata: dict[str, Any] = dict())
Result of ONNX node to hardware kernel transformation.
Attributes:

| Name | Type | Description |
|---|---|---|
| nodes_to_insert | list[NodeProto] | HW nodes to insert into graph |
| nodes_to_remove | list[NodeProto] | ONNX nodes to remove from graph |
| metadata | dict[str, Any] | Optional transformation metadata |
Example:
import brainsmith.dataflow as df
from onnx import helper
# Create transformation result when converting ONNX to HW node
hw_node = helper.make_node(
"LayerNorm",
inputs=list(node.input),
outputs=list(node.output),
domain="brainsmith.kernels",
name=f"LayerNorm_{node.name}",
)
result = df.TransformationResult(
nodes_to_insert=[hw_node],
nodes_to_remove=[node]
)
ShapeHierarchy ¶
Bases: Enum
Shape hierarchy level for constraints and relationships.
Attributes:

| Name | Description |
|---|---|
| STREAM | Stream shape (parallelism, elements per cycle) |
| BLOCK | Block shape (tiling dimensions) |
| TENSOR | Tensor shape (full logical dimensions) |
See Also¶
- Component Registry - Register custom kernels, backends, and steps
- Getting Started - Installation and quickstart
- GitHub - Issues and questions