vllm.ir ¶
Modules:
| Name | Description |
|---|---|
op | |
ops | |
util | |
enable_torch_wrap ¶
enable_torch_wrap(enable: bool = True)
Context manager to enable or disable torch custom op wrapping for vLLM IR ops. When torch wrapping is disabled, the torch custom op layer is skipped and IR ops dispatch directly to their implementations. This helps avoid torch dispatch overhead in eager mode and removes the need for lowering on platforms that do not use Inductor.
Source code in vllm/ir/op.py
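A minimal usage sketch based on the signature and description above (the body inside the `with` block is illustrative, not part of this page):

```python
import vllm.ir

# Disable torch custom op wrapping: inside this context, IR ops skip
# the torch custom op layer and dispatch directly to their implementations.
with vllm.ir.enable_torch_wrap(False):
    ...  # run vLLM IR ops eagerly, without torch dispatch overhead

# Torch wrapping is restored once the context exits.
```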
register_op ¶
register_op(
f: Callable | None = None,
*,
name: str | None = None,
activations: list[str] | None = None,
allow_inplace: bool = False,
) -> IrOp | Callable[[Callable], IrOp]
Register a new vLLM IR op.
Parameters:

| Name | Description |
|---|---|
| `f` | The native implementation of the op. |
| `name` | The name of the op; defaults to the function name. |
| `activations` | List of activation params; defaults to params starting with `x`. |
| `allow_inplace` | Add a `maybe_inplace` overload that allows in-place impls. |

Returns:

| Type | Description |
|---|---|
| `IrOp \| Callable[[Callable], IrOp]` | The `IrOp` object if `f` is provided, otherwise a decorator. |
Example usage:

```python
@vllm.ir.register_op
def my_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return x + y

@vllm.ir.register_op(name="custom_mul")
def multiply(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return x * y
```
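Once registered, the returned `IrOp` should be callable like the wrapped function; the sketch below assumes this (as the decorator usage above suggests) and combines it with `enable_torch_wrap` from above. Tensor sizes are arbitrary.

```python
import torch
import vllm.ir

@vllm.ir.register_op
def my_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return x + y

x = torch.randn(4)
y = torch.randn(4)

# Assumption: the IrOp is callable like the original function.
out = my_add(x, y)

# With torch wrapping disabled, the same call dispatches directly to
# the native implementation, so the result should match.
with vllm.ir.enable_torch_wrap(False):
    out_eager = my_add(x, y)

torch.testing.assert_close(out, out_eager)
```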