readme: add high level detail around IPC #10592

lgirdwood wants to merge 1 commit into thesofproject:main from
Conversation
High level information about how IPC works and some specifics for IPC3 and IPC4 protocols. Signed-off-by: Liam Girdwood <liam.r.girdwood@linux.intel.com>
Pull request overview
Adds new high-level documentation for the IPC core layer plus separate architecture overviews for IPC3 and IPC4, intended to help readers understand how mailbox interrupts are routed and how protocol-specific handlers process messages.
Changes:
- Added `src/ipc/readme.md` describing the core IPC layer responsibilities and processing flows.
- Added `src/ipc/ipc3/readme.md` documenting IPC3 command routing and example flows.
- Added `src/ipc/ipc4/readme.md` documenting IPC4 dispatch, pipeline state handling, module binding, and compound messages.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| src/ipc/readme.md | New core IPC architecture/flow documentation with diagrams and helper object notes. |
| src/ipc/ipc4/readme.md | New IPC4-specific overview covering dispatch, pipelines, binding, and compound messaging. |
| src/ipc/ipc3/readme.md | New IPC3-specific overview covering command routing, stream trigger, DAI config, and mailbox validation. |
> 1. **Message State Management**: Tracking if a message is being processed, queued, or completed.
> 2. **Interrupt Bridging**: Routing incoming platform interrupts into the Zephyr or SOF thread domains schedule.
> 3. **Queueing**: Safe traversal and delayed processing capabilities via `k_work` or `sof_task` items.

The doc references `sof_task` items, but there is no `sof_task` symbol/type in this repository. If this is meant to refer to the EDF `schedule_task`/task framework (e.g., `ipc->ipc_task`), please rename it accordingly or remove the backticks to avoid implying a concrete API.

Suggested change:
- 3. **Queueing**: Safe traversal and delayed processing capabilities via `k_work` or `sof_task` items.
+ 3. **Queueing**: Safe traversal and delayed processing capabilities via `k_work` items or SOF scheduler tasks.
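The message lifecycle and Tx-list behaviour discussed in this thread can be sketched in miniature. Everything below (`ipc_msg`, the `IPC_MSG_*` states, the enqueue/dequeue helpers) is an illustrative stand-in, not the SOF implementation:

```c
/* Hypothetical sketch of the core-layer message lifecycle described above.
 * Names and types are invented for illustration only. */
#include <assert.h>
#include <stddef.h>

enum ipc_msg_state {
	IPC_MSG_IDLE,       /* slot free */
	IPC_MSG_QUEUED,     /* waiting on the Tx list */
	IPC_MSG_PROCESSING, /* handler running in thread context */
	IPC_MSG_COMPLETE,   /* reply sent, slot may be reused */
};

struct ipc_msg {
	enum ipc_msg_state state;
	struct ipc_msg *next; /* singly-linked Tx list */
};

/* Append a message to the Tx list tail and mark it queued. */
void ipc_tx_enqueue(struct ipc_msg **head, struct ipc_msg *msg)
{
	struct ipc_msg **pos = head;

	while (*pos)
		pos = &(*pos)->next;
	msg->next = NULL;
	msg->state = IPC_MSG_QUEUED;
	*pos = msg;
}

/* Pop the list head for processing; returns NULL when the list is empty. */
struct ipc_msg *ipc_tx_dequeue(struct ipc_msg **head)
{
	struct ipc_msg *msg = *head;

	if (msg) {
		*head = msg->next;
		msg->state = IPC_MSG_PROCESSING;
	}
	return msg;
}
```

In the real firmware the equivalent bookkeeping is carried by the core layer's work/task items rather than this toy list.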
> ## Global IPC Objects and Helpers
>
> * `ipc_comp_dev`: Wrapper structure linking generic devices (`comp_dev`) specifically to their IPC pipeline and endpoint identifiers.
> * `ipc_get_comp_dev` / `ipc_get_ppl_comp`: Lookup assistants utilizing the central graph tracking to find specific components by ID or relative upstream/downstream connection strings.

`ipc_get_ppl_comp` is described as using "upstream/downstream connection strings", but the implementation takes a `pipeline_id` and a direction flag and walks the component/buffer graph (no string-based lookup). Please adjust the wording to match the actual API to prevent confusion about how pipeline endpoints are identified.

Suggested change:
- * `ipc_get_comp_dev` / `ipc_get_ppl_comp`: Lookup assistants utilizing the central graph tracking to find specific components by ID or relative upstream/downstream connection strings.
+ * `ipc_get_comp_dev` / `ipc_get_ppl_comp`: Lookup assistants utilizing the central graph tracking to find specific components either directly by component ID or by traversing the pipeline graph starting from a given `pipeline_id` and direction (upstream/downstream).
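To make the distinction in this comment concrete, here is a hedged C sketch contrasting the two lookup styles: direct match on component ID versus pipeline-ID-plus-direction. The types and function names are invented stand-ins, not the real `ipc_get_comp_dev`/`ipc_get_ppl_comp`:

```c
/* Illustrative lookup sketch; struct layout is hypothetical, not the
 * real struct ipc_comp_dev. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum dir { DIR_UPSTREAM, DIR_DOWNSTREAM };

struct comp_entry {
	uint32_t comp_id;
	uint32_t pipeline_id;
	int is_endpoint[2]; /* indexed by enum dir */
};

/* Direct match on component ID (ipc_get_comp_dev-style). */
struct comp_entry *find_by_comp_id(struct comp_entry *tab, size_t n,
				   uint32_t id)
{
	for (size_t i = 0; i < n; i++)
		if (tab[i].comp_id == id)
			return &tab[i];
	return NULL;
}

/* Pipeline-ID + direction lookup (ipc_get_ppl_comp-style): return the
 * endpoint of the pipeline in the requested direction. */
struct comp_entry *find_ppl_endpoint(struct comp_entry *tab, size_t n,
				     uint32_t ppl_id, enum dir d)
{
	for (size_t i = 0; i < n; i++)
		if (tab[i].pipeline_id == ppl_id && tab[i].is_endpoint[d])
			return &tab[i];
	return NULL;
}
```

Note that neither path involves any string comparison, which is the point of the suggested rewording.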
> DSP->>Queue: ipc_msg_send() / ipc_msg_send_direct()
> activate Queue
> Queue-->>Queue: Add to Tx list (if BUSY)
> Queue->>Platform: Tries ipc_copy_msg_to_host()

The send-flow diagram references `ipc_copy_msg_to_host()`, but that symbol doesn't exist in the codebase. To avoid misleading readers, please rename this step to the actual IPC Tx path (e.g., `ipc_send_queued_msg()` / `ipc_platform_send_msg()`), or describe it generically as "copy payload to mailbox and send".

Suggested change:
- Queue->>Platform: Tries ipc_copy_msg_to_host()
+ Queue->>Platform: Copy payload to mailbox and send
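The generic wording suggested here ("copy payload to mailbox and send") maps to a simple try-send-or-stay-queued pattern. A minimal model follows; the `mailbox` struct, `try_send()` and the busy flag are all invented for illustration, not SOF symbols:

```c
/* Conceptual model of the Tx step: if the host-bound doorbell is busy,
 * the caller keeps the message on the Tx list; otherwise the payload is
 * copied to the outbound mailbox window and the doorbell is rung. */
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAILBOX_SIZE 64

struct mailbox {
	bool busy;               /* host has not acked the previous message */
	char data[MAILBOX_SIZE]; /* outbound payload window */
};

/* Returns true when the payload was copied and "sent"; false means the
 * caller must keep the message queued for a later retry. */
bool try_send(struct mailbox *mb, const char *payload, size_t len)
{
	if (mb->busy || len > sizeof(mb->data))
		return false;
	memcpy(mb->data, payload, len);
	mb->busy = true; /* doorbell rung; cleared again by host ack */
	return true;
}
```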
> subgraph Module Handler
> Mod --> InitMod[ipc4_init_module_instance]
> Mod --> SetMod[ipc4_set_module_params]
> Mod --> GetMod[ipc4_get_module_params]
> Mod --> Bind[ipc4_bind]
> Mod --> Unbind[ipc4_unbind]
> end

Several function names in this dispatch diagram don't exist in the IPC4 implementation (`ipc4_bind`, `ipc4_unbind`, `ipc4_set_module_params`, `ipc4_get_module_params`). The actual handlers are named `ipc4_bind_module_instance` and `ipc4_unbind_module_instance`, and config is handled via `ipc4_set_get_config_module_instance` / `ipc4_get_large_config_module_instance` (etc.). Please update the diagram labels to match real symbols or make them intentionally generic (e.g., "bind module instance").
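For readers cross-referencing the symbols named in this comment, a small dispatch-table sketch shows the intended mapping. The enum values and table layout below are hypothetical; only the handler-name strings correspond to actual IPC4 symbols:

```c
/* Hedged sketch mapping module-message types to their IPC4 handlers. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum mod_msg {
	MOD_INIT_INSTANCE,
	MOD_BIND,
	MOD_UNBIND,
	MOD_SET_GET_CONFIG,
	MOD_MSG_COUNT,
};

/* Return the name of the handler that services a module-message type. */
const char *mod_handler_name(enum mod_msg type)
{
	static const char *const handlers[MOD_MSG_COUNT] = {
		[MOD_INIT_INSTANCE]  = "ipc4_init_module_instance",
		[MOD_BIND]           = "ipc4_bind_module_instance",
		[MOD_UNBIND]         = "ipc4_unbind_module_instance",
		[MOD_SET_GET_CONFIG] = "ipc4_set_get_config_module_instance",
	};

	if (type >= MOD_MSG_COUNT)
		return NULL;
	return handlers[type];
}
```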
> Triggering is strictly hierarchical via IPC3. It expects pipelines built and components fully parsed prior to active streaming commands.
>
> 1. **Validation**: The IPC fetches the host component ID.
> 2. **Device Loopup**: It searches the components list (`ipc_get_comp_dev`) for the PCM device matching the pipeline.

Typo: "Device Loopup" should be "Device Lookup".

Suggested change:
- 2. **Device Loopup**: It searches the components list (`ipc_get_comp_dev`) for the PCM device matching the pipeline.
+ 2. **Device Lookup**: It searches the components list (`ipc_get_comp_dev`) for the PCM device matching the pipeline.
> IPC3->>IPC3: ipc_get_comp_dev(stream_id)
> IPC3->>Pipe: pipeline_trigger(COMP_TRIGGER_START)

The sequence diagram calls `ipc_get_comp_dev(stream_id)`, but the actual call in `ipc_stream_trigger()` is `ipc_get_comp_by_id(ipc, stream.comp_id)` (and `ipc_get_comp_dev` has a different signature). Please adjust the diagram to match the implementation or keep it function-name-agnostic.
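The validate-then-trigger ordering both comments describe can be modelled in a few lines of C. The component struct, return codes and function names here are illustrative only, not `ipc_stream_trigger()` itself:

```c
/* Toy model of the trigger flow: look the PCM component up by ID first,
 * and only trigger when the lookup succeeds. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TRIG_START 1

struct pcm_dev {
	uint32_t comp_id;
	int state; /* 0 = idle, TRIG_START = running */
};

struct pcm_dev *pcm_lookup(struct pcm_dev *devs, size_t n, uint32_t comp_id)
{
	for (size_t i = 0; i < n; i++)
		if (devs[i].comp_id == comp_id)
			return &devs[i];
	return NULL;
}

/* Validate-then-trigger: -1 on unknown component ID, 0 on success. */
int stream_trigger(struct pcm_dev *devs, size_t n, uint32_t comp_id, int cmd)
{
	struct pcm_dev *d = pcm_lookup(devs, n, comp_id);

	if (!d)
		return -1; /* invalid component ID from host */
	d->state = cmd;
	return 0;
}
```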
> The Core IPC layer is completely agnostic to the specific structure or content of the messages (whether they are IPC3 stream commands or IPC4 pipeline messages). Its primary responsibilities are:
>
> 1. **Message State Management**: Tracking if a message is being processed, queued, or completed.
> 2. **Interrupt Bridging**: Routing incoming platform interrupts into the Zephyr or SOF thread domains schedule.

Grammar: "Routing incoming platform interrupts into the Zephyr or SOF thread domains schedule" reads incorrectly ("schedule" is a noun/verb mismatch). Consider rephrasing to reference the scheduler (e.g., "thread-domain scheduler") or "scheduling context" for clarity.

Suggested change:
- 2. **Interrupt Bridging**: Routing incoming platform interrupts into the Zephyr or SOF thread domains schedule.
+ 2. **Interrupt Bridging**: Routing incoming platform interrupts into the Zephyr or SOF thread-domain scheduler.
> The primary entry point is `ipc4_process_glb_message()`. This function determines if the incoming payload is a true global configuration message or if it's meant to be dispatched to a specific instantiated module.
>
> ```mermaid
> graph TD
> Mailbox[IPC Mailbox Interrupt] --> CoreIPC[Core IPC Worker]
> CoreIPC --> Valid[ipc4_get_message_request]
> Valid --> Disp[ipc4_process_glb_message]
> ```

This describes `ipc4_process_glb_message()` as the primary entry point, but the IPC4 handler entry point is `ipc_cmd()` (which then dispatches to `ipc4_process_glb_message()` for FW_GEN messages). Please adjust the wording so readers don't look for a non-entry public API.

Suggested change:
````diff
- The primary entry point is `ipc4_process_glb_message()`. This function determines if the incoming payload is a true global configuration message or if it's meant to be dispatched to a specific instantiated module.
+ IPC4 messages are received via the generic IPC handler entry point `ipc_cmd()`. For IPC4 FW_GEN (global) messages, `ipc_cmd()` dispatches to `ipc4_process_glb_message()`, which then determines if the incoming payload is a true global configuration message or if it's meant to be dispatched to a specific instantiated module.
  ```mermaid
  graph TD
- Mailbox[IPC Mailbox Interrupt] --> CoreIPC[Core IPC Worker]
- CoreIPC --> Valid[ipc4_get_message_request]
- Valid --> Disp[ipc4_process_glb_message]
+ Mailbox[IPC Mailbox Interrupt] --> CoreIPC[ipc_cmd]
+ CoreIPC --> TypeSel[Decode IPC Message Type]
+ TypeSel -->|IPC4 FW_GEN| Disp[ipc4_process_glb_message]
````
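The corrected flow (a single `ipc_cmd()`-style entry point that decodes the message type and forwards FW_GEN traffic to `ipc4_process_glb_message()`) amounts to a header-decode-and-route step. A sketch follows; the bit layout, masks and enum names are invented for illustration and do not match the real IPC4 header definition:

```c
/* Hedged model of top-level dispatch: peel a target field off the
 * primary header word and route to the matching handler family. */
#include <assert.h>
#include <stdint.h>

#define MSG_TARGET_SHIFT 30
#define MSG_TARGET_MASK  0x3u
#define TARGET_FW_GEN    0x1u
#define TARGET_MODULE    0x2u

enum route { ROUTE_GLB_MESSAGE, ROUTE_MODULE_MSG, ROUTE_UNSUPPORTED };

enum route route_message(uint32_t primary)
{
	uint32_t target = (primary >> MSG_TARGET_SHIFT) & MSG_TARGET_MASK;

	switch (target) {
	case TARGET_FW_GEN:
		return ROUTE_GLB_MESSAGE; /* -> ipc4_process_glb_message() */
	case TARGET_MODULE:
		return ROUTE_MODULE_MSG;  /* -> module-instance handlers */
	default:
		return ROUTE_UNSUPPORTED;
	}
}
```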
> IPC4Bind->>SrcMod: comp_set_sink(Buffer)
> IPC4Bind->>SinkMod: comp_set_source(Buffer)

In the `ipc4_bind` sequence diagram, the steps call `comp_set_sink()` / `comp_set_source()`, but these functions don't exist in the codebase (binding is done via buffer/graph connect + `comp_bind`/`comp_buffer_connect` flows). Please rename these calls to the actual APIs used or describe them as conceptual "bind source/sink pins" operations.

Suggested change:
- IPC4Bind->>SrcMod: comp_set_sink(Buffer)
- IPC4Bind->>SinkMod: comp_set_source(Buffer)
+ IPC4Bind->>SrcMod: Bind source pin to Buff (via comp_bind/comp_buffer_connect)
+ IPC4Bind->>SinkMod: Bind sink pin to Buff (via comp_bind/comp_buffer_connect)
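Conceptually, the bind step wires one shared buffer between the two module instances in both directions, which is the net effect of the `comp_bind`/`comp_buffer_connect` flow named above. A toy model, with all structures invented for illustration:

```c
/* Illustrative bind sketch: attach a buffer as the source module's sink
 * and the sink module's source so either side can walk the graph. */
#include <assert.h>
#include <stddef.h>

struct buf;

struct module {
	struct buf *source; /* upstream buffer, if bound */
	struct buf *sink;   /* downstream buffer, if bound */
};

struct buf {
	struct module *producer;
	struct module *consumer;
};

/* Wire src -> b -> snk in both directions. */
void bind_modules(struct module *src, struct module *snk, struct buf *b)
{
	src->sink = b;
	b->producer = src;
	b->consumer = snk;
	snk->source = b;
}
```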
> ## Command Structure and Routing
>
> Every message received is placed into an Rx buffer and initially routed to `ipc_cmd_handler()`. Based on the `cmd` inside the `sof_ipc_cmd_hdr`, it delegates to one of the handler subsystems:

`ipc_cmd_handler()` is referenced as the initial routing function, but there is no such symbol in IPC3; the handler entry point is `ipc_cmd()`. Please update the text to point at the correct entry function so the docs stay grep-able.

Suggested change:
- Every message received is placed into an Rx buffer and initially routed to `ipc_cmd_handler()`. Based on the `cmd` inside the `sof_ipc_cmd_hdr`, it delegates to one of the handler subsystems:
+ Every message received is placed into an Rx buffer and initially routed to `ipc_cmd()`. Based on the `cmd` inside the `sof_ipc_cmd_hdr`, it delegates to one of the handler subsystems:
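The delegation described in that paragraph is essentially a mask-and-switch on the `cmd` word from the header. A hedged sketch follows; the mask value, command constants and subsystem labels below are invented, not the real SOF IPC3 constants:

```c
/* Illustrative IPC3-style routing: mask the global type out of the
 * 32-bit cmd and fan out to a subsystem handler. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define GLB_TYPE_MASK  0xff000000u
#define GLB_STREAM_MSG 0x03000000u
#define GLB_DAI_MSG    0x04000000u
#define GLB_TPLG_MSG   0x05000000u

const char *route_ipc3_cmd(uint32_t cmd)
{
	switch (cmd & GLB_TYPE_MASK) {
	case GLB_STREAM_MSG:
		return "stream";   /* PCM trigger / params */
	case GLB_DAI_MSG:
		return "dai";      /* DAI config */
	case GLB_TPLG_MSG:
		return "topology"; /* component / pipeline setup */
	default:
		return "unknown";
	}
}
```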