
readme: add high level detail around IPC #10592

Open
lgirdwood wants to merge 1 commit into thesofproject:main from lgirdwood:readme-ipc

Conversation

@lgirdwood
Member

High level information about how IPC works and some specifics for IPC3 and IPC4 protocols.


Signed-off-by: Liam Girdwood <liam.r.girdwood@linux.intel.com>
Copilot AI review requested due to automatic review settings March 3, 2026 16:41
Contributor

Copilot AI left a comment


Pull request overview

Adds new high-level documentation for the IPC core layer plus separate architecture overviews for IPC3 and IPC4, intended to help readers understand how mailbox interrupts are routed and how protocol-specific handlers process messages.

Changes:

  • Added src/ipc/readme.md describing the core IPC layer responsibilities and processing flows.
  • Added src/ipc/ipc3/readme.md documenting IPC3 command routing and example flows.
  • Added src/ipc/ipc4/readme.md documenting IPC4 dispatch, pipeline state handling, module binding, and compound messages.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 11 comments.

| File | Description |
| --- | --- |
| src/ipc/readme.md | New core IPC architecture/flow documentation with diagrams and helper object notes. |
| src/ipc/ipc4/readme.md | New IPC4-specific overview covering dispatch, pipelines, binding, and compound messaging. |
| src/ipc/ipc3/readme.md | New IPC3-specific overview covering command routing, stream trigger, DAI config, and mailbox validation. |



1. **Message State Management**: Tracking if a message is being processed, queued, or completed.
2. **Interrupt Bridging**: Routing incoming platform interrupts into the Zephyr or SOF thread domains schedule.
3. **Queueing**: Safe traversal and delayed processing capabilities via `k_work` or `sof_task` items.

Copilot AI Mar 3, 2026


The doc references sof_task items, but there’s no sof_task symbol/type in this repository. If this is meant to refer to the EDF schedule_task/task framework (e.g., ipc->ipc_task), please rename it accordingly or remove backticks to avoid implying a concrete API.

Suggested change
3. **Queueing**: Safe traversal and delayed processing capabilities via `k_work` or `sof_task` items.
3. **Queueing**: Safe traversal and delayed processing capabilities via `k_work` items or SOF scheduler tasks.

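To make the three quoted responsibilities concrete, here is a minimal state-machine sketch of the message lifecycle the core layer tracks. It is illustrative only: the names `ipc_msg_state`, `ipc_msg_sketch`, and `ipc_msg_advance` are hypothetical and do not appear in SOF.

```c
#include <assert.h>

/* Hypothetical message states; the real SOF fields differ. */
enum ipc_msg_state {
	IPC_MSG_IDLE,       /* slot free */
	IPC_MSG_QUEUED,     /* waiting on the Tx list */
	IPC_MSG_PROCESSING, /* handler running in thread context */
	IPC_MSG_COMPLETE,   /* reply sent, ready for reuse */
};

struct ipc_msg_sketch {
	enum ipc_msg_state state;
};

/* Advance a message through its lifecycle; returns 0 on a legal
 * transition, -1 otherwise. */
static int ipc_msg_advance(struct ipc_msg_sketch *m, enum ipc_msg_state next)
{
	/* For each target state, the one state it may legally come from. */
	static const enum ipc_msg_state legal_prev[] = {
		[IPC_MSG_IDLE] = IPC_MSG_COMPLETE,
		[IPC_MSG_QUEUED] = IPC_MSG_IDLE,
		[IPC_MSG_PROCESSING] = IPC_MSG_QUEUED,
		[IPC_MSG_COMPLETE] = IPC_MSG_PROCESSING,
	};

	if (legal_prev[next] != m->state)
		return -1;
	m->state = next;
	return 0;
}
```

In the real firmware the queued-to-processing step is what the `k_work` item (or SOF scheduler task) performs; this sketch only models the bookkeeping.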
## Global IPC Objects and Helpers

* `ipc_comp_dev`: Wrapper structure linking generic devices (`comp_dev`) specifically to their IPC pipeline and endpoint identifiers.
* `ipc_get_comp_dev` / `ipc_get_ppl_comp`: Lookup assistants utilizing the central graph tracking to find specific components by ID or relative upstream/downstream connection strings.

Copilot AI Mar 3, 2026


ipc_get_ppl_comp is described as using “upstream/downstream connection strings”, but the implementation takes a pipeline_id and a direction flag and walks the component/buffer graph (no string-based lookup). Please adjust the wording to match the actual API to prevent confusion about how pipeline endpoints are identified.

Suggested change
* `ipc_get_comp_dev` / `ipc_get_ppl_comp`: Lookup assistants utilizing the central graph tracking to find specific components by ID or relative upstream/downstream connection strings.
* `ipc_get_comp_dev` / `ipc_get_ppl_comp`: Lookup assistants utilizing the central graph tracking to find specific components either directly by component ID or by traversing the pipeline graph starting from a given `pipeline_id` and direction (upstream/downstream).

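As a rough illustration of the corrected lookup semantics, the sketch below uses a flat table standing in for the component list and finds a pipeline endpoint by `pipeline_id` and direction. The real `ipc_get_ppl_comp()` walks the component/buffer graph rather than scanning a table, and `find_ppl_comp` is a hypothetical name.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-in for an entry in the IPC component list. */
struct comp_entry {
	uint32_t comp_id;
	uint32_t pipeline_id;
	int dir; /* 0 = downstream endpoint, 1 = upstream endpoint */
};

/* Find the endpoint component of a pipeline in a given direction. */
static const struct comp_entry *find_ppl_comp(const struct comp_entry *tbl,
					      size_t n, uint32_t pipeline_id,
					      int dir)
{
	for (size_t i = 0; i < n; i++)
		if (tbl[i].pipeline_id == pipeline_id && tbl[i].dir == dir)
			return &tbl[i];
	return NULL; /* no endpoint for that pipeline/direction */
}
```

The key point matching the review comment: the inputs are a numeric pipeline ID plus a direction flag, not connection strings.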
DSP->>Queue: ipc_msg_send() / ipc_msg_send_direct()
activate Queue
Queue-->>Queue: Add to Tx list (if BUSY)
Queue->>Platform: Tries ipc_copy_msg_to_host()

Copilot AI Mar 3, 2026


The send-flow diagram references ipc_copy_msg_to_host(), but that symbol doesn’t exist in the codebase. To avoid misleading readers, please rename this step to the actual IPC Tx path (e.g., ipc_send_queued_msg() / ipc_platform_send_msg()), or describe it generically as “copy payload to mailbox and send”.

Suggested change
Queue->>Platform: Tries ipc_copy_msg_to_host()
Queue->>Platform: Copy payload to mailbox and send

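The send flow in the quoted diagram can be sketched as a simple deferral decision: if the host-bound doorbell is still busy, the message is appended to the Tx list for a later retry; otherwise the payload is copied to the mailbox immediately. All names below are illustrative, not the SOF API.

```c
#include <assert.h>
#include <stdbool.h>

/* Counters standing in for the Tx list and the mailbox. */
struct tx_queue {
	int pending; /* messages waiting on the Tx list */
	int sent;    /* messages copied to the mailbox */
};

/* Decision made by the sketched ipc_msg_send() path. */
static void msg_send(struct tx_queue *q, bool host_busy)
{
	if (host_busy)
		q->pending++; /* defer: add to Tx list */
	else
		q->sent++;    /* copy payload to mailbox and ring doorbell */
}

/* Called when the host acks: drain one deferred message, if any. */
static void tx_done(struct tx_queue *q)
{
	if (q->pending) {
		q->pending--;
		q->sent++;
	}
}
```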
Comment on lines +30 to +36
subgraph Module Handler
Mod --> InitMod[ipc4_init_module_instance]
Mod --> SetMod[ipc4_set_module_params]
Mod --> GetMod[ipc4_get_module_params]
Mod --> Bind[ipc4_bind]
Mod --> Unbind[ipc4_unbind]
end

Copilot AI Mar 3, 2026


Several function names in this dispatch diagram don’t exist in the IPC4 implementation (ipc4_bind, ipc4_unbind, ipc4_set_module_params, ipc4_get_module_params). The actual handlers are named ipc4_bind_module_instance, ipc4_unbind_module_instance, and config is handled via ipc4_set_get_config_module_instance / ipc4_get_large_config_module_instance (etc.). Please update the diagram labels to match real symbols or make them intentionally generic (e.g., “bind module instance”).

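A module-handler dispatch along the lines of the quoted diagram, using the corrected handler names from the review comment as labels, might be sketched like this. The `mod_msg` type codes are hypothetical; the real values live in the IPC4 headers.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical module message type codes. */
enum mod_msg { MOD_INIT, MOD_CONFIG_SET, MOD_CONFIG_GET, MOD_BIND, MOD_UNBIND };

/* Map a module message type to the (real) handler it would reach. */
static const char *dispatch_module_msg(enum mod_msg type)
{
	switch (type) {
	case MOD_INIT:
		return "ipc4_init_module_instance";
	case MOD_CONFIG_SET: /* fall through: both go to set/get config */
	case MOD_CONFIG_GET:
		return "ipc4_set_get_config_module_instance";
	case MOD_BIND:
		return "ipc4_bind_module_instance";
	case MOD_UNBIND:
		return "ipc4_unbind_module_instance";
	}
	return "unsupported";
}
```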
Triggering is strictly hierarchical via IPC3. It expects pipelines built and components fully parsed prior to active streaming commands.

1. **Validation**: The IPC fetches the host component ID.
2. **Device Loopup**: It searches the components list (`ipc_get_comp_dev`) for the PCM device matching the pipeline.

Copilot AI Mar 3, 2026


Typo: “Device Loopup” should be “Device Lookup”.

Suggested change
2. **Device Loopup**: It searches the components list (`ipc_get_comp_dev`) for the PCM device matching the pipeline.
2. **Device Lookup**: It searches the components list (`ipc_get_comp_dev`) for the PCM device matching the pipeline.

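The validate-then-lookup-then-trigger sequence from the quoted steps can be sketched as below. The table scan stands in for the component-list lookup, and all names (`pcm_dev`, `stream_trigger_start`) are illustrative rather than SOF symbols.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative PCM device record. */
struct pcm_dev {
	uint32_t comp_id;
	int running; /* 0 = stopped, 1 = running */
};

/* Stand-in for the component-list lookup by host component ID. */
static struct pcm_dev *lookup_pcm(struct pcm_dev *devs, size_t n,
				  uint32_t comp_id)
{
	for (size_t i = 0; i < n; i++)
		if (devs[i].comp_id == comp_id)
			return &devs[i];
	return NULL;
}

/* Validate the host component ID, look up the device, then trigger. */
static int stream_trigger_start(struct pcm_dev *devs, size_t n,
				uint32_t comp_id)
{
	struct pcm_dev *d = lookup_pcm(devs, n, comp_id); /* device lookup */

	if (!d)
		return -1; /* invalid host component ID */
	d->running = 1;    /* conceptually: pipeline_trigger(START) */
	return 0;
}
```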
Comment on lines +63 to +64
IPC3->>IPC3: ipc_get_comp_dev(stream_id)
IPC3->>Pipe: pipeline_trigger(COMP_TRIGGER_START)

Copilot AI Mar 3, 2026


The sequence diagram calls ipc_get_comp_dev(stream_id), but the actual call in ipc_stream_trigger() is ipc_get_comp_by_id(ipc, stream.comp_id) (and ipc_get_comp_dev has a different signature). Please adjust the diagram to match the implementation or keep it function-name-agnostic.

The Core IPC layer is completely agnostic to the specific structure or content of the messages (whether they are IPC3 stream commands or IPC4 pipeline messages). Its primary responsibilities are:

1. **Message State Management**: Tracking if a message is being processed, queued, or completed.
2. **Interrupt Bridging**: Routing incoming platform interrupts into the Zephyr or SOF thread domains schedule.

Copilot AI Mar 3, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Grammar: “Routing incoming platform interrupts into the Zephyr or SOF thread domains schedule” reads incorrectly ("schedule" is a noun/verb mismatch). Consider rephrasing to reference the scheduler (e.g., “thread-domain scheduler”) or “scheduling context” for clarity.

Suggested change
2. **Interrupt Bridging**: Routing incoming platform interrupts into the Zephyr or SOF thread domains schedule.
2. **Interrupt Bridging**: Routing incoming platform interrupts into the Zephyr or SOF thread-domain scheduler.

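The interrupt-bridging responsibility discussed above follows a common pattern: the ISR does no message parsing, it only masks the doorbell and hands off to thread context (a `k_work` item under Zephyr, a scheduler task under XTOS). A minimal sketch, with all names hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* State shared between the (sketched) ISR and its work handler. */
struct ipc_bridge {
	bool irq_masked;
	int work_submitted;
	int handled;
};

/* Interrupt context: mask the doorbell and defer the real work. */
static void ipc_isr(struct ipc_bridge *b)
{
	b->irq_masked = true; /* silence further doorbells */
	b->work_submitted++;  /* conceptually: k_work_submit() */
}

/* Thread context: parse + dispatch, then re-arm the interrupt. */
static void ipc_work_handler(struct ipc_bridge *b)
{
	b->handled++;          /* message parsed and dispatched here */
	b->irq_masked = false; /* re-enable the doorbell interrupt */
}
```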
Comment on lines +11 to +18
The primary entry point is `ipc4_process_glb_message()`. This function determines if the incoming payload is a true global configuration message or if it's meant to be dispatched to a specific instantiated module.

```mermaid
graph TD
Mailbox[IPC Mailbox Interrupt] --> CoreIPC[Core IPC Worker]

CoreIPC --> Valid[ipc4_get_message_request]
Valid --> Disp[ipc4_process_glb_message]

Copilot AI Mar 3, 2026


This describes ipc4_process_glb_message() as the primary entry point, but the IPC4 handler entry point is ipc_cmd() (which then dispatches to ipc4_process_glb_message() for FW_GEN messages). Please adjust the wording so readers don’t look for a non-entry public API.

Suggested change
The primary entry point is `ipc4_process_glb_message()`. This function determines if the incoming payload is a true global configuration message or if it's meant to be dispatched to a specific instantiated module.
```mermaid
graph TD
Mailbox[IPC Mailbox Interrupt] --> CoreIPC[Core IPC Worker]
CoreIPC --> Valid[ipc4_get_message_request]
Valid --> Disp[ipc4_process_glb_message]
IPC4 messages are received via the generic IPC handler entry point `ipc_cmd()`. For IPC4 FW_GEN (global) messages, `ipc_cmd()` dispatches to `ipc4_process_glb_message()`, which then determines if the incoming payload is a true global configuration message or if it's meant to be dispatched to a specific instantiated module.
```mermaid
graph TD
Mailbox[IPC Mailbox Interrupt] --> CoreIPC[ipc_cmd]
CoreIPC --> TypeSel[Decode IPC Message Type]
TypeSel -->|IPC4 FW_GEN| Disp[ipc4_process_glb_message]

Comment on lines +95 to +96
IPC4Bind->>SrcMod: comp_set_sink(Buffer)
IPC4Bind->>SinkMod: comp_set_source(Buffer)

Copilot AI Mar 3, 2026


In the ipc4_bind sequence diagram, the steps call comp_set_sink() / comp_set_source(), but these functions don’t exist in the codebase (binding is done via buffer/graph connect + comp_bind/comp_buffer_connect flows). Please rename these calls to the actual APIs used or describe them as conceptual “bind source/sink pins” operations.

Suggested change
IPC4Bind->>SrcMod: comp_set_sink(Buffer)
IPC4Bind->>SinkMod: comp_set_source(Buffer)
IPC4Bind->>SrcMod: Bind source pin to Buff (via comp_bind/comp_buffer_connect)
IPC4Bind->>SinkMod: Bind sink pin to Buff (via comp_bind/comp_buffer_connect)

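The graph effect of the bind step discussed above (regardless of which real API performs it) is simply to place one buffer between two module instances. A conceptual sketch with hypothetical names:

```c
#include <assert.h>
#include <stddef.h>

struct buffer { int id; };

/* Minimal module with one input and one output connection. */
struct module {
	struct buffer *sink_buf;   /* where this module writes */
	struct buffer *source_buf; /* where this module reads */
};

/* Conceptual bind: connect src's output and dst's input via buf.
 * In SOF this is done by the comp_bind/comp_buffer_connect flows. */
static void bind_modules(struct module *src, struct module *dst,
			 struct buffer *buf)
{
	src->sink_buf = buf;   /* src writes into buf */
	dst->source_buf = buf; /* dst reads from buf */
}
```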

## Command Structure and Routing

Every message received is placed into an Rx buffer and initially routed to `ipc_cmd_handler()`. Based on the `cmd` inside the `sof_ipc_cmd_hdr`, it delegates to one of the handler subsystems:

Copilot AI Mar 3, 2026


ipc_cmd_handler() is referenced as the initial routing function, but there is no such symbol in IPC3; the handler entry point is ipc_cmd(). Please update the text to point at the correct entry function so the docs stay grep-able.

Suggested change
Every message received is placed into an Rx buffer and initially routed to `ipc_cmd_handler()`. Based on the `cmd` inside the `sof_ipc_cmd_hdr`, it delegates to one of the handler subsystems:
Every message received is placed into an Rx buffer and initially routed to `ipc_cmd()`. Based on the `cmd` inside the `sof_ipc_cmd_hdr`, it delegates to one of the handler subsystems:

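The delegation on the global type bits of `sof_ipc_cmd_hdr.cmd` can be sketched as a switch on the masked type field. The shift matches the IPC3 convention of encoding the global type in the top bits, but the specific type codes and handler IDs below are hypothetical.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical global type codes and handler IDs. */
enum glb_type { GLB_TPLG = 1, GLB_STREAM = 2, GLB_DAI = 3, GLB_PM = 4 };
enum handler { H_NONE, H_TPLG, H_STREAM, H_DAI, H_PM };

#define GLB_SHIFT 28u
#define GLB_MASK  (0xfu << GLB_SHIFT)

/* Route a raw cmd word to a handler subsystem. */
static enum handler route_cmd(uint32_t cmd)
{
	switch ((cmd & GLB_MASK) >> GLB_SHIFT) {
	case GLB_TPLG:   return H_TPLG;   /* topology: comp/pipe new, free */
	case GLB_STREAM: return H_STREAM; /* pcm params, trigger */
	case GLB_DAI:    return H_DAI;    /* dai config */
	case GLB_PM:     return H_PM;     /* power management */
	default:         return H_NONE;   /* unknown -> error reply */
	}
}
```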