Compare commits

...

2 Commits

Author SHA1 Message Date
Robert McMahon e9ddcd7b2b docs: FiWiManager example path; drop hub_manager references
Use FiWiManager in remote_ssh.env.example. Remove obsolete hub_manager
mentions from design and CLI docs.

Made-with: Cursor
2026-03-30 18:18:50 -07:00
Robert McMahon a030549f61 feat(fiwi): Fi-Wi package, SSH transport, diagnostics, and docs
Replace the hubmgr layout with the fiwi package and fiwi.py entry point.
Add SshNode with deferred remote subprocess overlap, async capture helpers,
and a central diag timeline (dmesg, lspci, kernel-dump event placeholder).

Document design, CLI, and test/automation authoring under docs/.
Refresh fiber_map.example.json and remote_ssh.env.example; widen .gitignore
for Python tooling and local operator files.

Made-with: Cursor
2026-03-30 18:16:57 -07:00
30 changed files with 3502 additions and 913 deletions

46
.gitignore vendored
View File

@@ -1,8 +1,44 @@
-env/
-__pycache__/
-hub_manager.py~
-arc.txt
-panel_map.json
-fiber_map.json
-remote_ssh.env
-.hub_manager_remote
+# --- Python ---
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+build/
+dist/
+*.egg-info/
+.eggs/
+pip-wheel-metadata/
+.venv/
+venv/
+env/
+
+# Test / type / coverage
+.pytest_cache/
+.mypy_cache/
+.ruff_cache/
+.coverage
+.coverage.*
+htmlcov/
+.tox/
+.nox/
+
+# --- Fi-Wi local install (next to fiwi.py) ---
+# Operator maps and SSH dotenv (see remote_ssh.env.example)
+fiber_map.json
+fiber_map.json.bak*
+*.json.bak*
+panel_map.json
+remote_ssh.env
+remote_ssh.env~
+.fiwi_remote
+
+# --- Editor / OS ---
+*~
+*.swp
+*.swo
+.DS_Store
+Thumbs.db
+
+# --- Project-specific ---
+arc.txt

View File

@@ -1,22 +0,0 @@
-cd ~/Code/acroname
-cat > umber_env.sh << 'EOF'
-# --- Umber Networks / Fi-Wi Dev Environment ---
-ACRO_BASE="$HOME/Code/acroname"
-if [ -d "$ACRO_BASE/env" ]; then
-source "$ACRO_BASE/env/bin/activate"
-fi
-# Acroname Hub Management for Fi-Wi Rig
-export ACRONAME_PATH="$HOME/Code/acroname/hub_manager.py"
-alias hub-status='$ACRONAME_PATH status all'
-alias hub-on='$ACRONAME_PATH on'
-alias hub-off='$ACRONAME_PATH off'
-alias hub-verify='$ACRONAME_PATH verify'
-alias hub-setup='$ACRONAME_PATH setup'
-alias hub-reboot='$ACRONAME_PATH reboot'
-alias hub-reboot-force='$ACRONAME_PATH reboot-force'
-EOF
-git add umber_env.sh
-git commit -m "Add Fi-Wi dev environment setup and hub aliases"
-git push

74
docs/fiwi-cli.md Normal file
View File

@@ -0,0 +1,74 @@
# Fi-Wi framework — command-line reference
This document describes how to invoke the Fi-Wi CLI from a shell. For architecture, see [fiwi-design.md](fiwi-design.md). For library-style automation, see [fiwi-test-authoring.md](fiwi-test-authoring.md).
## How to run
```text
python3 /path/to/fiwi.py [options] <command> [arguments…]
```
## Global options
| Option / env | Meaning |
|--------------|---------|
| `--async` (CLI) | Sets `FIWI_REMOTE_DEFER=1` before parsing the rest of argv. Deferred remote operations return handles or `asyncio.Task`s internally so work can overlap (for example during `panel calibrate`) without starting Python threads. |
| `FIWI_REMOTE_DEFER=1` | Same idea as `--async`; can be set in the environment or `remote_ssh.env`. |
| `fiwi.py --ssh user@host <command>…` | Runs that command on the remote via SSH using `FIWI_REMOTE_PYTHON` and `FIWI_REMOTE_SCRIPT` on the **remote** machine. No local BrainStem import for that invocation. |
## Environment and files (SSH / remote)
Copy `remote_ssh.env.example` to `remote_ssh.env` next to `fiwi.py` on the machine **from which you SSH** (paths inside the file refer to the **remote** host).
| Variable | Purpose |
|----------|---------|
| `FIWI_REMOTE_PYTHON` | Remote interpreter (default `python3`). |
| `FIWI_REMOTE_SCRIPT` | Remote `fiwi.py` path. |
| `FIWI_SSH_BIN` | SSH client binary (default `ssh`). |
| `FIWI_SSH_OPTS` | Extra SSH client arguments (for example `-o BatchMode=yes`). |
| `FIWI_CALIBRATE_REMOTES` | Comma-separated `user@host` list for hybrid `panel calibrate` without repeating `--ssh`. |
| `FIWI_REMOTE_DEFER` | Enable deferred remote subprocess overlap (see above). |
`fiber_map.json` may also carry `calibrate_remotes` for the same hybrid idea.
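For reference, a minimal `remote_ssh.env` might look like the sketch below. The values are illustrative placeholders; only the variable names come from the table above.
```text
# remote_ssh.env: lives next to fiwi.py on the workstation; paths refer to the remote host
FIWI_REMOTE_PYTHON=/home/pi/venv/bin/python3
FIWI_REMOTE_SCRIPT=/home/pi/fiwi/fiwi.py
FIWI_SSH_OPTS=-o BatchMode=yes
FIWI_CALIBRATE_REMOTES=pi@192.168.1.39
```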
## Commands (summary)
Default command if omitted: **`status`** (with default target `all`).
| Command | Description |
|---------|-------------|
| `discover` | List hubs (serial, port count); no per-port power I/O. |
| `status [target]` | Port status; target examples: `all`, `1.3`, `all.2`. |
| `on \| off [target]` | Power downstream. |
| `reboot [target]` | Reboot downstream; skips empty ports. |
| `reboot-force [target]` | Reboot; does not skip empty ports. |
| `setup` | Udev setup helper. |
| `verify` | Verification helper. |
| `calibrate-ports-json` | Prints JSON `[[hub, port], …]` for downstream calibration ordering (used by tools and hybrid calibrate). |
| `lsusb-lines-json` | JSON lines from `lsusb` for local snapshot. |
| `wlan-info-json` | JSON wireless / NIC snapshot (sysfs, `lspci`, `iw` where relevant). |
| `power fiber-port <id> on\|off` | Power by fiber map id; SSH-forwarded if the map routes that id to another host. |
| `fiber status` | Table of fiber ports, routes, power, saved previews. |
| `fiber chip <fiber_port_id> [save]` | Local `lsusb` probe; **not** used for SSH-mapped PCIe/fiber paths (see stderr message if misused). |
| `panel status` | Patch panel positions `1…N` (`N` from `patch_panel.slots`, default 24). |
| `panel on\|off <n>` | Power panel position `n`. |
| `panel reboot <n>` | Reboot with `skip_empty=True`. |
| `panel reboot-force <n>` | Reboot without skipping empty. |
| `panel calibrate [merge] [<N>] [--ssh user@host]…` | Interactive calibration: set panel size, walk hubs, update `fiber_map.json`. Optional **`merge`**, limit **`N`**, and one or more **`--ssh user@host`** for remote legs. Order: local downstream ports, then each remote's ports. |
| `help` | Longer built-in help text (also `-h` / `--help`). |
## Fiber map routing (CLI behavior)
Entries under `fiber_ports` may set routing to another machine, for example:
- `"ssh": "user@host"`, or
- `"remote": "…"`, or
- `"host"` + `"user"`.
On the SSH **destination**, the same fiber id should be **local** (no `ssh` field) so commands are not forwarded again.
Optional **`pcie`** objects hold switch / SFP / bus metadata; calibrate can fill these via numeric prompts.
## Exit codes
The CLI generally returns **0** on success and **non-zero** on usage errors or tool failures. **`fiber chip`** on an SSH-mapped port returns **2** with an explanatory stderr message (by design).

49
docs/fiwi-design.md Normal file
View File

@@ -0,0 +1,49 @@
# Fi-Wi framework — design
This document is for engineers changing or extending the Fi-Wi code. For command syntax, see [fiwi-cli.md](fiwi-cli.md). For automation and “test case” style usage, see [fiwi-test-authoring.md](fiwi-test-authoring.md).
## Purpose
The Fi-Wi stack ties together **Acroname / BrainStem USB hubs**, a **patch panel** model, and a **fiber map** (`fiber_map.json`) so operators can discover hubs, power downstream ports, walk calibration, and record per-fiber metadata (SSH routing, wireless, optional PCIe hints). Commands may run **locally** (Python loads BrainStem) or **on a remote host** over SSH when the map or CLI says the hardware lives elsewhere.
## Repository layout
| Piece | Role |
|--------|------|
| `fiwi.py` | Entry point; configures `fiwi.paths` with the install directory, then runs `fiwi.cli.main`. |
| `fiwi/` | Library: harness, SSH transport, fiber map I/O, USB/wireless probes, diagnostics. |
| `fiber_map.json` | Lives next to `fiwi.py` (see `fiwi.paths`). Operator-edited or calibrate-produced. |
| `remote_ssh.env` / `.fiwi_remote` | Optional dotenv next to the install; merged into process env for SSH defaults (see `SshNodeConfig`). |
## Core components
- **`FiWiHarness`** (`fiwi/harness.py`): Orchestrates BrainStem discovery, power, reboot, panel + fiber workflows, and **panel calibrate** (interactive and hybrid local/remote ordering).
- **`FiberRadioPort`** (`fiwi/fiber_radio_port.py`): View of one `fiber_ports[]` entry: power routing, SSH node resolution, previews for map-driven UIs.
- **`SshNode` / `SshNodeConfig`** (`fiwi/ssh_node.py`): Builds `ssh` argv (optional `FIWI_SSH_OPTS`), runs remote `FIWI_REMOTE_PYTHON` + `FIWI_REMOTE_SCRIPT` with forwarded subcommands, or **raw** `ssh target …` for helpers like `dmesg` / `lspci` in diagnostics. Supports **deferred** capture (`RemoteCallHandle`, `asyncio.Task`) when `defer=True` or `FIWI_REMOTE_DEFER` / `--async` so overlapping subprocesses do not require Python threads.
- **`ssh_dispatch`** (`fiwi/ssh_dispatch.py`): Early CLI path: if `fiber_map.json` routes a **power fiber-port** (or certain fiber flows) to another host, dispatch runs `SshNode.invoke` **without** importing BrainStem on the workstation.
- **`fiber_map_io`**: Load/save map, PCIe prompt helpers during calibrate, previews.
- **`patch_panel`**: Effective slot count from map defaults.
- **`ieee80211_dev` / `usb_probe`**: Machine-readable snapshots used by calibrate (including over SSH via `wlan-info-json`, `lsusb-lines-json`).
## Data flow
1. **Local BrainStem path**: `fiwi.cli.main` loads BrainStem, constructs `FiWiHarness`, runs subcommand.
2. **Fiber-mapped SSH path**: Before BrainStem load, `dispatch_fiber_mapped_ssh_if_needed` may forward specific argv patterns based on the map.
3. **Explicit `--ssh user@host`**: Runs `SshNode.invoke` with TTY-friendly SSH when needed; no local BrainStem required for that invocation.
4. **Hybrid calibrate**: Local hub/port list first, then each remote host's list (from `--ssh`, `calibrate_remotes` in JSON, or `FIWI_CALIBRATE_REMOTES`). Remote enumeration uses `calibrate-ports-json` or `discover` fallback on the remote.
## Diagnostic timeline (`fiwi/diag_log.py`)
A **central, append-only log** records ordered events (`NoteEvent`, `DmesgEvent`, `PcieEvent`, `KernelDumpEvent`) with wall-clock timestamps. JSONL export is a straight `dataclasses.asdict` per line. **Kernel crash dump collection is not implemented**; `KernelDumpEvent` and `alog_kernel_dump` / `kernel_dump_event` exist so future kdump/vmcore steps append to the same timeline without changing `DiagLog` storage.
## Extension points
- New **CLI subcommands**: `fiwi/cli.py` dispatch; heavy logic stays in `FiWiHarness` or focused modules.
- New **map-driven behavior**: extend `fiber_ports` schema carefully; teach `FiberRadioPort` / `ssh_dispatch` if routing changes.
- New **remote machine-readable commands**: add branches in `fiwi/cli.py` (remote runs the same script).
- New **diagnostic event kinds**: add a frozen dataclass, widen `DiagEvent`, keep `dump_jsonl` as dict-per-line.
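As a sketch of the last bullet, a new event kind would follow the same frozen-dataclass pattern as the existing events. The `ThermalEvent` name and fields here are hypothetical, not part of the current API; wiring it in for real would also mean widening the `DiagEvent` union in `fiwi/diag_log.py`.

```python
import time
from dataclasses import asdict, dataclass
from typing import Literal, Tuple


# Hypothetical new event kind, mirroring the shape of DmesgEvent / KernelDumpEvent.
@dataclass(frozen=True)
class ThermalEvent:
    kind: Literal["thermal"]
    ts_wall: float
    host_label: str
    sensor: str
    celsius: float
    metadata: Tuple[Tuple[str, str], ...] = ()


def thermal_event(host_label: str, sensor: str, celsius: float) -> ThermalEvent:
    """Builder in the style of note_event / kernel_dump_event."""
    return ThermalEvent(
        kind="thermal",
        ts_wall=time.time(),
        host_label=host_label,
        sensor=sensor,
        celsius=celsius,
    )


# dump_jsonl stays dict-per-line: dataclasses.asdict works unchanged on the new class.
row = asdict(thermal_event("local", "cpu", 61.5))
```

Because `dump_jsonl` serializes via `asdict`, no storage change is needed; only the union type and (optionally) a builder are added.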
## Dependencies and constraints
- BrainStem Python API is required for local hub control paths.
- Remote hosts need a working `fiwi.py` (or configured script path), udev for hubs where applicable, and consistent `fiber_map` semantics so SSH forwarding does not loop.

78
docs/fiwi-test-authoring.md Normal file
View File

@@ -0,0 +1,78 @@
# Fi-Wi framework — authoring test cases and automation
This document is for people who write **repeatable checks**, **calibration procedures**, or **scripts** around Fi-Wi—not for end users who only run `panel calibrate` interactively. For shell commands, see [fiwi-cli.md](fiwi-cli.md). For code structure, see [fiwi-design.md](fiwi-design.md).
The repository does not yet ship a pytest suite for `fiwi/`; the patterns below apply whether you drive **`subprocess`** against `fiwi.py` or import **`fiwi`** as a library.
## Choose a control style
1. **Shell / subprocess** — Easiest to run exactly what operators run. Parse JSON from stdout for machine-readable commands (`calibrate-ports-json`, `lsusb-lines-json`, `wlan-info-json`). Stable for CI-style wrappers.
2. **Python imports** — Use `FiWiHarness`, `SshNode`, `FiberRadioPort`, and `fiwi.diag_log` for tighter integration, error handling, and async (`ainvoke_capture`, `alog_dmesg`, and so on).
Mix both: for example import `SshNode` to run remote JSON probes while keeping local steps in the shell.
## Stable machine-readable commands
These are intended for automation (same argv on local or remote via `--ssh`):
| Command | Typical use |
|---------|-------------|
| `calibrate-ports-json` | Ordered `[[hub, port], …]` for calibration or power walks. |
| `lsusb-lines-json` | USB topology snapshot on the host where the command runs. |
| `wlan-info-json` | Wireless NIC metadata for map fill-in over SSH. |
| `status` | Human tables; less ideal for parsing than JSON helpers. |
When testing **remote** behavior, run the same subcommand with `fiwi.py --ssh user@host …` so the environment matches production.
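For the subprocess style, a small helper that runs one of these commands and parses JSON stdout is usually enough. The `fiwi.py` path in the commented example is a placeholder; only the JSON commands listed above are assumed.

```python
import json
import subprocess
import sys
from typing import Any, List


def run_json(argv: List[str], timeout: float = 60.0) -> Any:
    """Run a command expected to print JSON on stdout; raise with stderr on nonzero exit."""
    proc = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    if proc.returncode != 0:
        raise RuntimeError(f"{argv!r} failed rc={proc.returncode}: {proc.stderr.strip()}")
    return json.loads(proc.stdout)


# Example (path is a placeholder; insert "--ssh", "user@host" before the command to run remotely):
# pairs = run_json(["python3", "/path/to/fiwi.py", "calibrate-ports-json"])
```

The same helper works unchanged for `lsusb-lines-json` and `wlan-info-json`, locally or through `--ssh`.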
## Fiber map as test fixture
- Keep a **golden** `fiber_map.json` (or merge from `fiber_map.example.json`) under version control for non-interactive tests.
- For SSH-mapped fibers, ensure the map on the **workstation** points at the DUT, and the map on the **DUT** marks those ports as local to avoid forwarding loops.
- Use **`power fiber-port <id> on|off`** in scripts when the goal is “same as operator” routing through the map.
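One way to set that up in a test is to write a throwaway map into a temporary directory and point the code (or a subprocess working directory) at it. The entry shapes below follow `fiber_map.example.json`; the helper name is ours, not part of `fiwi`.

```python
import json
import tempfile
from pathlib import Path

# Golden map: one local port and one SSH-routed port, shaped like fiber_map.example.json.
GOLDEN_MAP = {
    "patch_panel": {"slots": 24, "label": "Test rack"},
    "fiber_ports": {
        "2": {"hub": 1, "port": 1},
        "6": {"hub": 1, "port": 5, "ssh": "pi@192.168.1.39"},
    },
}


def write_golden_map(dirpath: Path) -> Path:
    """Write fiber_map.json into dirpath (e.g. a temp dir the test points fiwi at)."""
    path = dirpath / "fiber_map.json"
    path.write_text(json.dumps(GOLDEN_MAP, indent=2), encoding="utf-8")
    return path


with tempfile.TemporaryDirectory() as d:
    loaded = json.loads(write_golden_map(Path(d)).read_text(encoding="utf-8"))
```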
## Deferred remote work (overlap)
If your automation fires **many SSH captures** in parallel (similar to `panel calibrate`):
- Set **`FIWI_REMOTE_DEFER=1`** or pass **`--async`** so deferred calls return **`RemoteCallHandle`** or **`asyncio.Task`** instead of blocking immediately inside each call.
- Join with **`.result()`** on handles or **`await`** on tasks before asserting outcomes.
Avoid assuming synchronous blocking remote calls if defer is enabled.
## Async library example (sketch)
```python
import asyncio

from fiwi import SshNode, alog_hardware_snapshot, get_diag_log


async def remote_baseline(host: str):
    log = get_diag_log()
    ssh = SshNode.parse(host)
    await alog_hardware_snapshot(log, ssh=ssh, caption=f"baseline {host}")
    code, out, err = await ssh.ainvoke_capture(["wlan-info-json"], timeout=60)
    assert code == 0, (code, err)
    # parse JSON from `out`


asyncio.run(remote_baseline("pi@192.168.1.50"))
```
Use **`ainvoke_capture`** / **`araw_ssh`** when you are already inside an async test harness; use sync **`invoke_capture`** / **`raw_ssh`** for straight-line scripts.
## Diagnostic log in tests
- **`get_diag_log()`** returns a process-wide **`DiagLog`**; call **`log.clear()`** at the start of a test case if you need isolation without subprocess boundaries.
- **`alog_hardware_snapshot`** appends note + **dmesg** + **lspci** in a fixed order (captures may run concurrently, but log order is stable).
- **`KernelDumpEvent`** is reserved for future kdump/vmcore steps; today you can append **`kernel_dump_event(...)`** manually to exercise JSONL export.
**`dump_jsonl(path)`** writes one JSON object per line suitable for attaching to bug reports.
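Reading an export back is plain JSONL, one object per line; the field names follow the dataclasses in `fiwi/diag_log.py`.

```python
import json
from pathlib import Path
from typing import List


def load_events(path: Path) -> List[dict]:
    """Parse a DiagLog JSONL export (one JSON object per line)."""
    return [
        json.loads(line)
        for line in path.read_text(encoding="utf-8").splitlines()
        if line.strip()
    ]


# e.g. assert any(e["kind"] == "dmesg" and e["exit_code"] == 0 for e in load_events(Path("diag.jsonl")))
```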
## Good practices
- **Timeouts**: Always pass explicit timeouts on remote calls that your test framework allows to fail slowly.
- **Idempotence**: Prefer power-off at the end of destructive sequences; document required starting hub state.
- **Secrets**: Do not commit `remote_ssh.env` with passwords; prefer keys and `FIWI_SSH_OPTS`.
- **Assertions**: When checking JSON from `invoke_capture`, assert **exit code** and parse errors separately so failures show stderr from the remote `fiwi.py`.
## Where to add real unit tests later
A future **`tests/`** package can import **`fiwi`** modules with BrainStem mocked or skipped, and subprocess tests can point `fiwi.paths.configure` at a temporary directory with a throwaway `fiber_map.json`. This document stays valid: same JSON commands and public API surface.

fiber_map.example.json
View File

@@ -1,4 +1,5 @@
 {
+  "patch_panel": { "slots": 24, "label": "Example rack" },
   "calibrate_remotes": ["pi@192.168.1.39"],
   "fiber_ports": {
     "1": {
@@ -21,7 +22,40 @@
     },
     "2": { "hub": 1, "port": 1 },
     "5": { "hub": 1, "port": 4 },
-    "6": { "hub": 1, "port": 5, "ssh": "pi@192.168.1.39" },
+    "6": {
+      "hub": 1,
+      "port": 5,
+      "ssh": "pi@192.168.1.39",
+      "chip_type": "Network controller [0280]: Intel Corporation Wi-Fi 6 AX210 …",
+      "radio_interface": "wlan0",
+      "wlan": {
+        "wlan_scanned_at": "2026-03-27T12:00:00Z",
+        "interfaces": {
+          "wlan0": {
+            "interface": "wlan0",
+            "driver": "iwlwifi",
+            "pci_address": "0000:01:00.0",
+            "connection_type": "PCIe",
+            "vendor_id": "8086",
+            "device_id": "2725",
+            "chip_label": "Network controller [0280]: Intel Corporation Wi-Fi 6 AX210 …",
+            "bands_ghz": [2.4, 5, 6],
+            "wiphy": "phy0"
+          }
+        },
+        "primary": {
+          "interface": "wlan0",
+          "driver": "iwlwifi",
+          "pci_address": "0000:01:00.0",
+          "connection_type": "PCIe",
+          "vendor_id": "8086",
+          "device_id": "2725",
+          "chip_label": "Network controller [0280]: Intel Corporation Wi-Fi 6 AX210 …",
+          "bands_ghz": [2.4, 5, 6],
+          "wiphy": "phy0"
+        }
+      }
+    },
     "7": { "hub": 2, "port": 0, "host": "192.168.1.39", "user": "pi" }
   }
 }

fiwi.py
View File

@ -1,13 +1,13 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
"""Thin entry: JSON maps and remote_ssh.env resolve to this files directory.""" """Fi-Wi test framework CLI — maps and SSH env files resolve to this files directory."""
import os import os
import hubmgr.paths as _paths import fiwi.paths as _paths
_paths.configure(os.path.dirname(os.path.abspath(__file__))) _paths.configure(os.path.dirname(os.path.abspath(__file__)))
from hubmgr.cli import main from fiwi.cli import main
if __name__ == "__main__": if __name__ == "__main__":
raise SystemExit(main()) raise SystemExit(main())

61
fiwi/__init__.py Normal file
View File

@@ -0,0 +1,61 @@
"""Fi-Wi test framework — public types for library use."""
from fiwi.diag_log import (
    DiagEvent,
    DiagLog,
    DmesgEvent,
    KernelDumpEvent,
    NoteEvent,
    PcieEvent,
    acapture_dmesg,
    acapture_pcie,
    alog_dmesg,
    alog_hardware_snapshot,
    alog_kernel_dump,
    alog_note,
    alog_pcie,
    get_diag_log,
    kernel_dump_event,
    note_event,
)
from fiwi.fiber_radio_port import FiberRadioPort
from fiwi.harness import FiWiHarness
from fiwi.patch_panel import PatchPanel
from fiwi.ssh_node import (
    FetchCalibratePortsHandle,
    RemoteCallHandle,
    RemoteInvokeHandle,
    SshNode,
    SshNodeConfig,
    apply_fiwi_ssh_env,
    resolve_remote_defer,
)

__all__ = [
    "DiagEvent",
    "DiagLog",
    "DmesgEvent",
    "KernelDumpEvent",
    "FiberRadioPort",
    "FetchCalibratePortsHandle",
    "FiWiHarness",
    "NoteEvent",
    "PatchPanel",
    "PcieEvent",
    "RemoteCallHandle",
    "RemoteInvokeHandle",
    "SshNode",
    "SshNodeConfig",
    "acapture_dmesg",
    "acapture_pcie",
    "alog_dmesg",
    "alog_hardware_snapshot",
    "alog_kernel_dump",
    "alog_note",
    "alog_pcie",
    "apply_fiwi_ssh_env",
    "get_diag_log",
    "kernel_dump_event",
    "note_event",
    "resolve_remote_defer",
]

274
fiwi/cli.py Normal file
View File

@@ -0,0 +1,274 @@
"""Fi-Wi framework CLI: argv dispatch, --ssh, fiber-map forwarding."""
import json
import os
import sys

from fiwi.harness import FiWiHarness
from fiwi.brainstem_loader import load_brainstem
from fiwi.ssh_dispatch import dispatch_fiber_mapped_ssh_if_needed
from fiwi.ssh_node import SshNode
from fiwi import usb_probe as usb


def _strip_async_from_argv(argv: list[str]) -> list[str]:
    """``--async`` → ``FIWI_REMOTE_DEFER=1`` (deferred remote calls return handles / Tasks)."""
    out = []
    for a in argv:
        if a == "--async":
            os.environ["FIWI_REMOTE_DEFER"] = "1"
            continue
        out.append(a)
    return out
def _parse_panel_calibrate_argv(args):
    """
    ``panel calibrate [merge] [N] [--ssh user@host]…``

    Returns ``(merge, limit, calibrate_ssh_hosts)``.
    """
    merge = False
    limit = None
    hosts = []
    i = 0
    while i < len(args):
        a = args[i]
        low = a.lower()
        if low == "merge":
            merge = True
            i += 1
            continue
        if low == "--ssh":
            if i + 1 >= len(args):
                print("panel calibrate: --ssh requires user@host", file=sys.stderr, flush=True)
                sys.exit(2)
            hosts.append(args[i + 1].strip())
            i += 2
            continue
        if a.isdigit():
            limit = int(a)
            i += 1
            continue
        print(f"panel calibrate: unknown argument {a!r}", file=sys.stderr, flush=True)
        sys.exit(2)
    return merge, limit, hosts
def main() -> int:
    sys.argv[:] = [sys.argv[0]] + _strip_async_from_argv(sys.argv[1:])
    argv = sys.argv[1:]
    if len(argv) >= 2 and argv[0] in ("--ssh", "--remote"):
        remote_host = argv[1]
        rest = argv[2:]
        if not rest:
            print(
                "Usage: fiwi.py --ssh user@host <command> [args...]\n"
                " Example: fiwi.py --ssh pi@192.168.1.39 discover\n"
                " If brainstem is in a Pi venv: copy remote_ssh.env.example → remote_ssh.env next to\n"
                " this script on the PC where you run --ssh (paths in the file are on the Pi).\n"
                " Or export FIWI_REMOTE_PYTHON / FIWI_REMOTE_SCRIPT.\n"
                " On the Pi: pip install -r requirements.txt in that venv; udev 24ff.",
                file=sys.stderr,
                flush=True,
            )
            return 2
        return SshNode.parse(remote_host).invoke(rest, defer=False)

    rc_ssh_map = dispatch_fiber_mapped_ssh_if_needed(argv)
    if rc_ssh_map is not None:
        return rc_ssh_map

    os.write(2, b"fiwi: start\n")
    try:
        load_brainstem()
    except Exception as exc:
        print(f"fiwi: failed to import brainstem: {exc}", file=sys.stderr, flush=True)
        if isinstance(exc, ImportError):
            print(
                " If this text came from `fiwi.py --ssh …`: the remote used system python3 by default.\n"
                " On your PC export FIWI_REMOTE_PYTHON to the Pi venv's python and\n"
                " FIWI_REMOTE_SCRIPT to that fiwi.py (absolute paths on the Pi).",
                file=sys.stderr,
                flush=True,
            )
        return 1

    harness = FiWiHarness()
    try:
        cmd = sys.argv[1].lower() if len(sys.argv) > 1 else "status"
        target = sys.argv[2] if len(sys.argv) > 2 else "all"
        if cmd == "status":
            harness.status(target)
        elif cmd == "calibrate-ports-json":
            if not harness.hubs and not harness.connect():
                print("[]", flush=True)
            else:
                pairs = harness._ordered_downstream_ports()
                print(json.dumps([[h, p] for h, p in pairs]), flush=True)
        elif cmd == "lsusb-lines-json":
            print(json.dumps(usb.lsusb_lines()), flush=True)
        elif cmd == "wlan-info-json":
            from fiwi.ieee80211_dev import discover_wireless_for_map

            print(json.dumps(discover_wireless_for_map()), flush=True)
        elif cmd == "discover":
            harness.discover()
        elif cmd == "power":
            if len(sys.argv) < 5 or sys.argv[2].lower() != "fiber-port":
                print(
                    "Usage: fiwi.py power fiber-port <fiber_port_id> on|off\n"
                    " Uses fiber_map.json; per-entry ssh / host+user forwards to that host (see help).",
                    file=sys.stderr,
                    flush=True,
                )
                return 2
            try:
                fp_n = int(sys.argv[3])
            except ValueError:
                print("fiber_port_id must be an integer.", file=sys.stderr, flush=True)
                return 2
            mode = sys.argv[4].lower()
            if mode not in ("on", "off"):
                print("Last argument must be on or off.", file=sys.stderr, flush=True)
                return 2
            harness.fiber_power(mode, fp_n)
        elif cmd == "fiber":
            if len(sys.argv) < 3:
                print(
                    "Usage: fiwi.py fiber status\n"
                    " fiwi.py fiber chip <fiber_port_id> [save]\n"
                    " status — hub.port, Route, power, and saved previews from fiber_map.json\n"
                    " chip — local lsusb diff only (not used for SSH-mapped / PCIe-fiber paths)",
                    file=sys.stderr,
                    flush=True,
                )
                return 2
            sub = sys.argv[2].lower()
            if sub == "status":
                harness.fiber_map_status()
            elif sub == "chip":
                if len(sys.argv) < 4:
                    print(
                        "Usage: fiwi.py fiber chip <fiber_port_id> [save]",
                        file=sys.stderr,
                        flush=True,
                    )
                    return 2
                try:
                    chip_fp = int(sys.argv[3])
                except ValueError:
                    print("fiber_port_id must be an integer.", file=sys.stderr, flush=True)
                    return 2
                save_chip = len(sys.argv) >= 5 and sys.argv[4].lower() == "save"
                harness.fiber_chip(chip_fp, save=save_chip)
            else:
                print(f"Unknown fiber subcommand: {sub!r}", file=sys.stderr, flush=True)
                return 2
        elif cmd == "panel":
            if len(sys.argv) < 3:
                print(
                    "Usage: fiwi.py panel status\n"
                    " fiwi.py panel on|off <panel_port>\n"
                    " fiwi.py panel reboot|reboot-force <panel_port>\n"
                    " fiwi.py panel calibrate [merge] [<N>] [--ssh user@host] …\n"
                    " calibrate: local hub ports first, then each --ssh host, calibrate_remotes in JSON, and/or\n"
                    " FIWI_CALIBRATE_REMOTES in remote_ssh.env (comma-separated) for one-command hybrid.\n"
                    " merge / N as before; remote steps set \"ssh\" on new fiber_ports entries.\n"
                    " Calibrate starts by setting patch_panel.slots (front-panel positions); panel <n> is 1…slots.\n"
                    " Use power fiber-port for arbitrary fiber ids beyond the panel if needed.\n"
                    " Preset: fiber_map.rpi20.json → fiber_map.json for 8+8+4 → fiber ports 1…20.",
                    file=sys.stderr,
                    flush=True,
                )
                return 2
            sub = sys.argv[2].lower()
            if sub == "status":
                harness.panel_status()
            elif sub == "calibrate":
                args = sys.argv[3:]
                merge, limit, cal_hosts = _parse_panel_calibrate_argv(args)
                harness.panel_calibrate(merge=merge, limit=limit, calibrate_ssh_hosts=cal_hosts)
            elif sub in ("on", "off"):
                if len(sys.argv) < 4:
                    print(
                        f"Usage: fiwi.py panel {sub} <1-N> (N = patch_panel.slots in fiber_map, default 24)",
                        file=sys.stderr,
                        flush=True,
                    )
                    return 2
                harness.panel_power(sub, int(sys.argv[3]))
            elif sub == "reboot":
                if len(sys.argv) < 4:
                    print(
                        "Usage: fiwi.py panel reboot <1-N> (N from fiber_map patch_panel.slots)",
                        file=sys.stderr,
                        flush=True,
                    )
                    return 2
                harness.panel_reboot(int(sys.argv[3]), skip_empty=True)
            elif sub == "reboot-force":
                if len(sys.argv) < 4:
                    print(
                        "Usage: fiwi.py panel reboot-force <1-N> (N from fiber_map patch_panel.slots)",
                        file=sys.stderr,
                        flush=True,
                    )
                    return 2
                harness.panel_reboot(int(sys.argv[3]), skip_empty=False)
            else:
                print(f"Unknown panel subcommand: {sub!r}", file=sys.stderr, flush=True)
                return 2
        elif cmd in ("on", "off"):
            if not harness.power(cmd, target):
                return 1
        elif cmd in ("reboot", "reboot-force"):
            harness.reboot(target, skip_empty=(cmd == "reboot"))
        elif cmd == "setup":
            harness.setup_udev()
        elif cmd == "verify":
            harness.verify()
        elif cmd in ("help", "-h", "--help"):
            print(
                "Fi-Wi test framework — CLI\n"
                "Usage: fiwi.py [--async] <command> [target]\n"
                " --async set FIWI_REMOTE_DEFER: deferred calls spawn ssh child processes immediately;\n"
                " panel calibrate overlaps them (join via handle.result(); no Python threads).\n"
                " Or set FIWI_REMOTE_DEFER=1 / remote_ssh.env (see remote_ssh.env.example).\n"
                " discover — list hubs (serial, port count); no port I/O\n"
                " status [target] — default command; target like all, 1.3, all.2\n"
                " fiber status — fiber_ports + power (local or per-entry ssh / host+user)\n"
                " fiber chip <id> [save] — local lsusb probe (SSH-mapped ports: wlan/pcie from calibrate, not fiber chip)\n"
                " wlan-info-json — machine-readable wireless NIC snapshot (sysfs + lspci/iw); used by panel calibrate over SSH\n"
                " power fiber-port <id> on|off — power by fiber key (ssh forward if map says so)\n"
                " panel status — rack positions 1…N (N = patch_panel.slots, default 24)\n"
                " panel calibrate … — set patch panel size first, then USB hub walk → fiber_map.json\n"
                " panel on|off|reboot|reboot-force <n>\n"
                " on|off [target] reboot|reboot-force [target] setup verify\n"
                "\n"
                "Remote (hubs on another host — no local brainstem needed):\n"
                " fiwi.py --ssh user@host discover\n"
                " remote_ssh.env next to fiwi.py (see remote_ssh.env.example) or env vars:\n"
                " FIWI_REMOTE_PYTHON remote interpreter (default python3)\n"
                " FIWI_REMOTE_SCRIPT remote script path (default /usr/local/bin/fiwi.py)\n"
                " FIWI_SSH_OPTS e.g. '-o BatchMode=yes'\n"
                " FIWI_CALIBRATE_REMOTES optional comma-separated user@host for panel calibrate (no --ssh needed)\n"
                " Pi: venv + requirements.txt; udev 24ff.\n"
                "\n"
                "fiber_map.json fiber_ports entries may set ssh routing (hubs on another machine):\n"
                ' "ssh": "user@host" or "remote": "…" or "host": "ip", "user": "pi"\n'
                " On the SSH destination, the same fiber id should be local (omit ssh) so commands are not re-forwarded.\n"
                ' Optional "pcie": { bus, switch, slot, adapter_port, sfp_serial, board_serial, … } — calibrate can fill via 16+SFP.\n'
                "\n"
                "Hybrid calibrate: put {\"calibrate_remotes\": [\"pi@ip\"]} in fiber_map.json or pass --ssh per host;\n"
                " order is all local downstream ports, then each remote's ports (see calibrate-ports-json on the Pi)."
            )
        else:
            print(f"Unknown command: {cmd!r}", file=sys.stderr, flush=True)
            print(
                "Try: --ssh user@host … | discover | calibrate-ports-json | wlan-info-json | status | fiber | power | panel | … | help",
                file=sys.stderr,
                flush=True,
            )
            return 2
    finally:
        harness.disconnect()
    return 0

437
fiwi/diag_log.py Normal file
View File

@@ -0,0 +1,437 @@
"""
Central diagnostic timeline: append-only, wall-clock ordered events for post-mortem debugging.

Async helpers snapshot **dmesg** and **lspci** (local or over SSH). Use :func:`get_diag_log` for a
process-wide buffer, or construct :class:`DiagLog` for tests / isolated sessions.

Extension: kernel debugging / crash dumps (collectors TBD)
----------------------------------------------------------
Planned **kdump** / **vmcore** / **pstore** / netconsole workflows will append :class:`KernelDumpEvent`
rows (host, phase, human summary, artifact paths, optional tool output, scalar metadata pairs). That
keeps crash artifacts in the same JSONL timeline as :class:`DmesgEvent` and :class:`PcieEvent`.
Future async helpers (e.g. ``acapture_kernel_dump`` / SSH pull of ``/var/crash``) should build
:class:`KernelDumpEvent` and :meth:`DiagLog.append` them; no change to :class:`DiagLog` storage.
"""
from __future__ import annotations

import asyncio
import json
import threading
import time
from dataclasses import asdict, dataclass
from pathlib import Path
from typing import Dict, List, Literal, Optional, Sequence, Tuple, Union

from fiwi.ssh_node import SshNode
@dataclass(frozen=True)
class NoteEvent:
    """Operator or code annotation on the diagnostic timeline."""

    kind: Literal["note"]
    ts_wall: float
    message: str


@dataclass(frozen=True)
class DmesgEvent:
    """Kernel ring buffer snapshot (``dmesg``), for link resets, PCIe AER, USB, etc."""

    kind: Literal["dmesg"]
    ts_wall: float
    host_label: str
    exit_code: int
    stdout: str
    stderr: str
    argv_used: Tuple[str, ...]


@dataclass(frozen=True)
class PcieEvent:
    """PCI device listing (``lspci``), for topology and driver binding issues."""

    kind: Literal["pcie"]
    ts_wall: float
    host_label: str
    exit_code: int
    stdout: str
    stderr: str


@dataclass(frozen=True)
class KernelDumpEvent:
    """
    Kernel crash-dump lifecycle (kdump/vmcore, pstore, copy steps, ``crash`` session notes).

    Populated by future collectors; use :func:`kernel_dump_event` to append manually today.
    ``phase`` examples: ``"kdump_enabled"``, ``"vmcore_ready"``, ``"artifact_copied"``, ``"pstore"``.
    """

    kind: Literal["kernel_dump"]
    ts_wall: float
    host_label: str
    phase: str
    summary: str
    artifact_paths: Tuple[str, ...] = ()
    capture_log: str = ""
    metadata: Tuple[Tuple[str, str], ...] = ()


DiagEvent = Union[NoteEvent, DmesgEvent, PcieEvent, KernelDumpEvent]
class DiagLog:
    """
    Ordered event list (wall-clock ``ts_wall``). Safe to append from sync code or asyncio tasks.

    New diagnostic categories (e.g. :class:`KernelDumpEvent`) are added as union members of
    :obj:`DiagEvent`; :meth:`dump_jsonl` stays ``dataclasses.asdict`` per row.
    """

    __slots__ = ("_events", "_lock")

    def __init__(self) -> None:
        self._events: List[DiagEvent] = []
        self._lock = threading.Lock()

    def append(self, event: DiagEvent) -> None:
        with self._lock:
            self._events.append(event)

    def extend(self, events: Sequence[DiagEvent]) -> None:
        with self._lock:
            self._events.extend(events)

    def clear(self) -> None:
        with self._lock:
            self._events.clear()

    def snapshot(self) -> List[DiagEvent]:
        with self._lock:
            return list(self._events)

    def dump_jsonl(self, path: Union[str, Path], *, append: bool = False) -> None:
        """Write one JSON object per line (UTF-8)."""
        p = Path(path)
        mode = "a" if append else "w"
        lines = [json.dumps(asdict(e), ensure_ascii=False) + "\n" for e in self.snapshot()]
        p.parent.mkdir(parents=True, exist_ok=True)
        with p.open(mode, encoding="utf-8") as f:
            f.writelines(lines)
_diag_singleton: Optional[DiagLog] = None
_diag_singleton_lock = threading.Lock()
def get_diag_log() -> DiagLog:
"""Shared :class:`DiagLog` for the process (lazy)."""
global _diag_singleton
with _diag_singleton_lock:
if _diag_singleton is None:
_diag_singleton = DiagLog()
return _diag_singleton
def note_event(message: str, *, ts_wall: Optional[float] = None) -> NoteEvent:
return NoteEvent(
kind="note",
ts_wall=time.time() if ts_wall is None else ts_wall,
message=message,
)
def kernel_dump_event(
*,
host_label: str,
phase: str,
summary: str,
artifact_paths: Sequence[str] = (),
capture_log: str = "",
metadata: Optional[Dict[str, str]] = None,
ts_wall: Optional[float] = None,
) -> KernelDumpEvent:
"""Build a :class:`KernelDumpEvent` (for manual logging until async collectors exist)."""
pairs: Tuple[Tuple[str, str], ...] = tuple(
sorted((metadata or {}).items(), key=lambda kv: kv[0])
)
return KernelDumpEvent(
kind="kernel_dump",
ts_wall=time.time() if ts_wall is None else ts_wall,
host_label=host_label,
phase=phase,
summary=summary,
artifact_paths=tuple(artifact_paths),
capture_log=capture_log,
metadata=pairs,
)
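The metadata normalization above (dict to sorted tuple of pairs) keeps events immutable, hashable, and deterministic in JSONL output; a quick illustration (the values are made up):

```python
# Dict metadata becomes a sorted, immutable tuple of (key, value) pairs,
# matching the normalization kernel_dump_event performs.
metadata = {"vmcore": "/var/crash/202603/vmcore", "arch": "x86_64"}
pairs = tuple(sorted(metadata.items(), key=lambda kv: kv[0]))
assert pairs == (("arch", "x86_64"), ("vmcore", "/var/crash/202603/vmcore"))
```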
async def _async_run_capture(
argv: Sequence[str], *, timeout: float
) -> Tuple[int, str, str]:
try:
proc = await asyncio.create_subprocess_exec(
*argv,
stdin=asyncio.subprocess.DEVNULL,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
except OSError as e:
return 1, "", str(e)
try:
out_b, err_b = await asyncio.wait_for(proc.communicate(), timeout=timeout)
except asyncio.TimeoutError:
proc.kill()
try:
await proc.wait()
except OSError:
pass
return 124, "", "timeout"
code = proc.returncode if proc.returncode is not None else 1
return (
code,
(out_b or b"").decode(errors="replace"),
(err_b or b"").decode(errors="replace"),
)
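A trimmed, self-contained version of this capture pattern, runnable anywhere by spawning the Python interpreter instead of ``dmesg`` (the ``run_capture`` name is illustrative, not the module's):

```python
import asyncio
import sys

async def run_capture(argv, timeout=10.0):
    # Same return shape as _async_run_capture: (exit_code, stdout, stderr),
    # with 124 standing in for a timeout.
    proc = await asyncio.create_subprocess_exec(
        *argv,
        stdin=asyncio.subprocess.DEVNULL,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        out, err = await asyncio.wait_for(proc.communicate(), timeout=timeout)
    except asyncio.TimeoutError:
        proc.kill()
        await proc.wait()
        return 124, "", "timeout"
    return proc.returncode or 0, out.decode(errors="replace"), err.decode(errors="replace")

code, out, err = asyncio.run(run_capture([sys.executable, "-c", "print('ok')"]))
assert code == 0 and out.strip() == "ok"
```

Note that ``asyncio.wait_for`` raises ``asyncio.TimeoutError`` (aliased to the builtin ``TimeoutError`` on Python 3.11+), not ``subprocess.TimeoutExpired``.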
async def acapture_dmesg_local(
*, host_label: str = "local", timeout: float = 45.0
) -> DmesgEvent:
ts = time.time()
for args in (("dmesg", "-T"), ("dmesg",)):
code, out, err = await _async_run_capture(args, timeout=timeout)
if code == 0 or args == ("dmesg",):
return DmesgEvent(
kind="dmesg",
ts_wall=ts,
host_label=host_label,
exit_code=code,
stdout=out,
stderr=err,
argv_used=tuple(args),
)
raise RuntimeError("unreachable dmesg capture loop")
async def acapture_pcie_local(
*,
host_label: str = "local",
timeout: float = 60.0,
include_tree: bool = True,
) -> PcieEvent:
ts = time.time()
parts: List[str] = []
errs: List[str] = []
worst = 0
code, out, err = await _async_run_capture(("lspci", "-nn"), timeout=timeout)
worst = max(worst, code)
parts.append("=== lspci -nn ===\n" + out)
if err.strip():
errs.append(err)
if include_tree:
code2, out2, err2 = await _async_run_capture(("lspci", "-tv"), timeout=timeout)
worst = max(worst, code2)
parts.append("\n=== lspci -tv ===\n" + out2)
if err2.strip():
errs.append(err2)
return PcieEvent(
kind="pcie",
ts_wall=ts,
host_label=host_label,
exit_code=worst,
stdout="".join(parts),
stderr="\n".join(errs),
)
async def acapture_dmesg_ssh(
ssh: SshNode,
*,
timeout: float = 45.0,
) -> DmesgEvent:
ts = time.time()
label = ssh.target
for args in (["dmesg", "-T"], ["dmesg"]):
code, out, err = await ssh.araw_ssh(args, timeout=timeout, defer=False)
if code == 0 or args == ["dmesg"]:
return DmesgEvent(
kind="dmesg",
ts_wall=ts,
host_label=label,
exit_code=code,
stdout=out,
stderr=err,
argv_used=tuple(args),
)
raise RuntimeError("unreachable dmesg SSH capture loop")
async def acapture_pcie_ssh(
ssh: SshNode,
*,
timeout: float = 60.0,
include_tree: bool = True,
) -> PcieEvent:
ts = time.time()
label = ssh.target
parts: List[str] = []
errs: List[str] = []
worst = 0
code, out, err = await ssh.araw_ssh(["lspci", "-nn"], timeout=timeout, defer=False)
worst = max(worst, code)
parts.append("=== lspci -nn ===\n" + out)
if err.strip():
errs.append(err)
if include_tree:
code2, out2, err2 = await ssh.araw_ssh(["lspci", "-tv"], timeout=timeout, defer=False)
worst = max(worst, code2)
parts.append("\n=== lspci -tv ===\n" + out2)
if err2.strip():
errs.append(err2)
return PcieEvent(
kind="pcie",
ts_wall=ts,
host_label=label,
exit_code=worst,
stdout="".join(parts),
stderr="\n".join(errs),
)
async def acapture_dmesg(
*,
ssh: Optional[SshNode] = None,
host_label: Optional[str] = None,
timeout: float = 45.0,
) -> DmesgEvent:
"""Local ``dmesg`` when ``ssh`` is ``None``, else remote via :class:`~fiwi.ssh_node.SshNode`."""
if ssh is None:
return await acapture_dmesg_local(
host_label=host_label or "local", timeout=timeout
)
ev = await acapture_dmesg_ssh(ssh, timeout=timeout)
if host_label is not None:
return DmesgEvent(
kind="dmesg",
ts_wall=ev.ts_wall,
host_label=host_label,
exit_code=ev.exit_code,
stdout=ev.stdout,
stderr=ev.stderr,
argv_used=ev.argv_used,
)
return ev
async def acapture_pcie(
*,
ssh: Optional[SshNode] = None,
host_label: Optional[str] = None,
timeout: float = 60.0,
include_tree: bool = True,
) -> PcieEvent:
if ssh is None:
return await acapture_pcie_local(
host_label=host_label or "local",
timeout=timeout,
include_tree=include_tree,
)
ev = await acapture_pcie_ssh(ssh, timeout=timeout, include_tree=include_tree)
if host_label is not None:
return PcieEvent(
kind="pcie",
ts_wall=ev.ts_wall,
host_label=host_label,
exit_code=ev.exit_code,
stdout=ev.stdout,
stderr=ev.stderr,
)
return ev
async def alog_note(log: DiagLog, message: str) -> NoteEvent:
ev = note_event(message)
log.append(ev)
return ev
async def alog_dmesg(
log: DiagLog,
*,
ssh: Optional[SshNode] = None,
host_label: Optional[str] = None,
timeout: float = 45.0,
) -> DmesgEvent:
ev = await acapture_dmesg(ssh=ssh, host_label=host_label, timeout=timeout)
log.append(ev)
return ev
async def alog_pcie(
log: DiagLog,
*,
ssh: Optional[SshNode] = None,
host_label: Optional[str] = None,
timeout: float = 60.0,
include_tree: bool = True,
) -> PcieEvent:
ev = await acapture_pcie(
ssh=ssh, host_label=host_label, timeout=timeout, include_tree=include_tree
)
log.append(ev)
return ev
async def alog_kernel_dump(
log: DiagLog,
*,
host_label: str,
phase: str,
summary: str,
artifact_paths: Sequence[str] = (),
capture_log: str = "",
metadata: Optional[Dict[str, str]] = None,
) -> KernelDumpEvent:
"""Append a :class:`KernelDumpEvent` (for async collectors once kdump/vmcore pull is implemented)."""
ev = kernel_dump_event(
host_label=host_label,
phase=phase,
summary=summary,
artifact_paths=artifact_paths,
capture_log=capture_log,
metadata=metadata,
)
log.append(ev)
return ev
async def alog_hardware_snapshot(
log: DiagLog,
*,
ssh: Optional[SshNode] = None,
caption: str = "dmesg + lspci",
timeout_dmesg: float = 45.0,
timeout_pcie: float = 60.0,
include_tree: bool = True,
) -> Tuple[NoteEvent, DmesgEvent, PcieEvent]:
"""
One note plus concurrent dmesg and lspci captures (same host). Handy after a failure or
between calibrate steps. Events are appended in a stable order: note, then dmesg, then PCIe.
"""
ev_note = note_event(caption)
log.append(ev_note)
dmesg_ev, pcie_ev = await asyncio.gather(
acapture_dmesg(ssh=ssh, timeout=timeout_dmesg),
acapture_pcie(
ssh=ssh,
timeout=timeout_pcie,
include_tree=include_tree,
),
)
log.append(dmesg_ev)
log.append(pcie_ev)
return ev_note, dmesg_ev, pcie_ev
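The note-then-gather pattern above can be sketched with a self-contained mini version; ``MiniLog`` and ``fake_capture`` are illustrative stand-ins for ``DiagLog`` and the real capture coroutines:

```python
import asyncio
import threading

class MiniLog:
    # Thread-safe append-only list, mirroring DiagLog's locking discipline
    def __init__(self):
        self._events, self._lock = [], threading.Lock()
    def append(self, ev):
        with self._lock:
            self._events.append(ev)
    def snapshot(self):
        with self._lock:
            return list(self._events)

async def fake_capture(name, delay):
    await asyncio.sleep(delay)  # stands in for a dmesg / lspci subprocess
    return {"kind": name}

async def snapshot_pair(log):
    # Note first, then both captures run concurrently; events appended in
    # a stable order regardless of which capture finishes first.
    log.append({"kind": "note", "message": "dmesg + lspci"})
    dmesg_ev, pcie_ev = await asyncio.gather(
        fake_capture("dmesg", 0.02), fake_capture("pcie", 0.01)
    )
    log.append(dmesg_ev)
    log.append(pcie_ev)

log = MiniLog()
asyncio.run(snapshot_pair(log))
assert [e["kind"] for e in log.snapshot()] == ["note", "dmesg", "pcie"]
```

Appending after the ``gather`` (rather than inside each coroutine) is what guarantees the note/dmesg/pcie ordering the docstring promises.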


@@ -1,4 +1,4 @@
-"""Load/save fiber_map.json and legacy panel_map.json; parse entries and chip fields."""
+"""Load/save fiber_map.json; parse hub/port bindings and chip metadata."""
import json
import os
@@ -6,15 +6,14 @@ import re
import sys
import time
-from hubmgr.adnacom_pcie_catalog import (
+from fiwi.adnacom_pcie_catalog import (
    ADNACOM_KNOWN_CARDS,
    board_serial_tail,
    pcie_from_card_and_lane,
    print_catalog_menu,
    short_bdf,
)
-from hubmgr.constants import PANEL_SLOTS
-from hubmgr.paths import fiber_map_path, panel_map_path
+from fiwi.paths import fiber_map_path
def ensure_fiber_map_document(doc):
@@ -29,39 +28,23 @@ def ensure_fiber_map_document(doc):
    return out
-def document_from_legacy_panel_array(slots):
-    fiber_ports = {}
-    for idx, slot in enumerate(slots):
-        if slot is not None:
-            fiber_ports[str(idx + 1)] = {"hub": slot[0], "port": slot[1]}
-    return {"fiber_ports": fiber_ports}
def load_fiber_map_document():
-    """
-    Load routing map: prefer fiber_map.json (object keyed by fiber port).
-    If missing, migrate in-memory from legacy panel_map.json (24-slot array).
-    Returns None if neither file exists.
-    """
+    """Load ``fiber_map.json`` if present; return None if missing."""
    fpath = fiber_map_path()
-    ppath = panel_map_path()
-    if os.path.isfile(fpath):
-        with open(fpath, encoding="utf-8") as f:
-            return ensure_fiber_map_document(json.load(f))
-    if os.path.isfile(ppath):
-        slots = read_panel_map_file(ppath)
-        return ensure_fiber_map_document(document_from_legacy_panel_array(slots))
-    return None
+    if not os.path.isfile(fpath):
+        return None
+    with open(fpath, encoding="utf-8") as f:
+        return ensure_fiber_map_document(json.load(f))
def load_fiber_map_or_exit():
    doc = load_fiber_map_document()
    if doc is None:
        print(
-            f"Missing {fiber_map_path()} (or legacy {panel_map_path()}).\n"
+            f"Missing {fiber_map_path()}.\n"
            " Copy fiber_map.example.json → fiber_map.json and set fiber_ports "
            '(e.g. "5": {"hub": 1, "port": 2, "ssh": "pi@192.168.1.39"}). '
-            "Use ssh / remote / host+user when Acroname hubs are reached via SSH.",
+            "Use ssh / remote / host+user when USB hubs are reached via SSH.",
            file=sys.stderr,
            flush=True,
        )
@@ -76,7 +59,7 @@ def fiber_sort_key(key):
    return (1, s)
-def parse_panel_map_entry(entry):
+def parse_hub_port_entry(entry):
    """Return (hub_1based, port_0based) or None if unmapped / invalid."""
    if entry is None:
        return None
@@ -98,7 +81,7 @@ def parse_panel_map_entry(entry):
def fiber_entry_hub_port(entry):
-    return parse_panel_map_entry(entry)
+    return parse_hub_port_entry(entry)
def fiber_ssh_target(entry):
@@ -148,13 +131,31 @@ def stored_chip_preview(entry, width=26):
    """Short label for status tables from saved probe metadata."""
    if not isinstance(entry, dict):
        return ""
-    for k in ("chip_type", "usb_description", "usb_id"):
-        v = entry.get(k)
-        if isinstance(v, str) and v.strip():
-            s = v.strip().replace("\n", " ")
-            if len(s) > width:
-                return s[: width - 2] + ".."
-            return s
+    def _take(v):
+        if not isinstance(v, str) or not v.strip():
+            return ""
+        s = v.strip().replace("\n", " ")
+        if len(s) > width:
+            return s[: width - 2] + ".."
+        return s
+    for k in ("chip_type",):
+        got = _take(entry.get(k))
+        if got:
+            return got
+    wlan = entry.get("wlan")
+    if isinstance(wlan, dict):
+        prim = wlan.get("primary")
+        if isinstance(prim, dict):
+            for k in ("chip_label", "product"):
+                got = _take(prim.get(k))
+                if got:
+                    return got
+    for k in ("usb_description", "usb_id"):
+        got = _take(entry.get(k))
+        if got:
+            return got
    return ""
@@ -352,13 +353,3 @@ def prompt_pcie_metadata_for_calibrate(existing_pcie):
    return ("keep", None)
-def read_panel_map_file(path):
-    """Load and normalize panel map JSON to a list of length PANEL_SLOTS."""
-    with open(path, encoding="utf-8") as f:
-        data = json.load(f)
-    if not isinstance(data, list):
-        raise ValueError("panel map must be a JSON array")
-    slots = [parse_panel_map_entry(x) for x in data]
-    while len(slots) < PANEL_SLOTS:
-        slots.append(None)
-    return slots[:PANEL_SLOTS]
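The ``(hub_1based, port_0based)`` contract used throughout the map I/O can be sketched against the documented entry shape (``parse_hub_port`` here is an illustrative re-implementation; the real ``parse_hub_port_entry`` in ``fiwi.fiber_map_io`` may accept additional shapes):

```python
# Illustrative parse of a fiber_map.json entry like
#   "5": {"hub": 1, "port": 2, "ssh": "pi@192.168.1.39"}
# into (hub_1based, port_0based), or None when unmapped / invalid.
def parse_hub_port(entry):
    if not isinstance(entry, dict):
        return None
    hub, port = entry.get("hub"), entry.get("port")
    if isinstance(hub, int) and isinstance(port, int) and hub >= 1 and port >= 0:
        return hub, port
    return None

assert parse_hub_port({"hub": 1, "port": 2, "ssh": "pi@192.168.1.39"}) == (1, 2)
assert parse_hub_port(None) is None
assert parse_hub_port({"hub": 0, "port": 2}) is None  # hubs are 1-based
```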

fiwi/fiber_radio_port.py (new file, 94 lines)

@@ -0,0 +1,94 @@
"""
Fiber + radio port: central domain object for a row in fiber_map.json.
Power (Acroname hub downstream), SSH routing, PCIe / wlan / USB metadata all hang off this
aggregate. ``FiWiHarness`` supplies BrainStem power but is not the conceptual center.
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import Any, Dict, Iterator, List, Optional, Tuple
from fiwi import fiber_map_io as fm
from fiwi.ssh_node import SshNode
@dataclass
class FiberRadioPort:
"""
One logical fiber/radio attachment: ``fiber_ports[<map_key>]`` in the map document.
``entry`` is the live dict from the document (mutations persist when the doc is saved).
"""
map_key: str
entry: Optional[Dict[str, Any]]
@property
def port_id(self) -> Optional[int]:
"""Integer fiber id when ``map_key`` is decimal; else None."""
if self.map_key.isdigit():
return int(self.map_key)
return None
def hub_port(self) -> Optional[Tuple[int, int]]:
"""(hub_1based, port_0based) or None if unmapped / invalid."""
return fm.fiber_entry_hub_port(self.entry)
def ssh_target(self) -> Optional[str]:
"""user@host when this port's hubs are reached via SSH; else None."""
return fm.fiber_ssh_target(self.entry) if isinstance(self.entry, dict) else None
def ssh_node(self) -> Optional[SshNode]:
"""Remote Fi-Wi host (``SshNode``) for this port, or None when mapped locally."""
t = self.ssh_target()
if not t:
return None
try:
return SshNode.parse(t)
except ValueError:
return None
def is_mapped(self) -> bool:
"""True when hub.port is valid in the entry."""
return self.hub_port() is not None
def chip_preview(self, width: int = 26) -> str:
return fm.stored_chip_preview(self.entry, width) if isinstance(self.entry, dict) else ""
def pcie_preview(self, width: int = 22) -> str:
return fm.stored_pcie_preview(self.entry, width) if isinstance(self.entry, dict) else ""
@classmethod
def from_map_key(cls, doc: Dict[str, Any], map_key: str) -> FiberRadioPort:
ports = doc.get("fiber_ports") if isinstance(doc, dict) else None
if not isinstance(ports, dict):
return cls(str(map_key), None)
ent = ports.get(str(map_key))
return cls(
str(map_key),
ent if isinstance(ent, dict) else None,
)
@classmethod
def from_port_id(cls, doc: Dict[str, Any], port_id: int) -> FiberRadioPort:
return cls.from_map_key(doc, str(int(port_id)))
@staticmethod
def each_from_document(doc: Dict[str, Any]) -> Iterator[FiberRadioPort]:
"""All ``fiber_ports`` rows (sorted), including unmapped / empty values."""
ports = doc.get("fiber_ports") if isinstance(doc, dict) else None
if not isinstance(ports, dict):
return
for key in sorted(ports.keys(), key=fm.fiber_sort_key):
ent = ports[key]
yield FiberRadioPort(
str(key),
ent if isinstance(ent, dict) else None,
)
def load_fiber_radio_ports(doc: Dict[str, Any]) -> List[FiberRadioPort]:
"""Registry of all fiber map rows (sorted keys)."""
return list(FiberRadioPort.each_from_document(doc))
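``each_from_document`` walks the keys in ``fiber_sort_key`` order, which (from the fragment visible in fiber_map_io) appears to sort decimal keys numerically ahead of everything else; a self-contained sketch of that ordering (``sort_key`` is an illustrative stand-in):

```python
# Illustrative: decimal map keys sort numerically and come first, other
# keys sort lexically after them, as fiber_sort_key appears to do.
def sort_key(k):
    s = str(k)
    return (0, int(s)) if s.isdigit() else (1, s)

doc = {"fiber_ports": {"10": {"hub": 1, "port": 3},
                       "2": {"hub": 1, "port": 0},
                       "lab-a": {}}}
keys = sorted(doc["fiber_ports"], key=sort_key)
assert keys == ["2", "10", "lab-a"]  # "10" after "2", not before
```

The two-element tuple avoids ever comparing an ``int`` to a ``str``, which would raise ``TypeError`` on mixed key sets.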


@@ -1,4 +1,4 @@
-"""Acroname BrainStem hub manager: connect, power, panel/fiber map, calibrate."""
+"""Fi-Wi test harness: BrainStem USB power, fiber/radio map, patch panel, remote nodes."""
import asyncio
import copy
@@ -8,15 +8,32 @@ import shutil
import sys
import time
-import hubmgr.brainstem_loader as stemmod
-from hubmgr.constants import PANEL_SLOTS
-from hubmgr.paths import fiber_map_path
-from hubmgr import fiber_map_io as fm
-from hubmgr import remote_ssh as rs
-from hubmgr import usb_probe as usb
+import fiwi.brainstem_loader as stemmod
+from fiwi.constants import PANEL_SLOTS
+from fiwi.patch_panel import PatchPanel, effective_panel_slots
+from fiwi.paths import fiber_map_path
+from fiwi.fiber_radio_port import FiberRadioPort
+from fiwi import fiber_map_io as fm
+from fiwi.ssh_node import (
+    SshNode,
+    SshNodeConfig,
+    parse_status_line_for_hub_port,
+    resolve_remote_defer,
+)
+from fiwi import usb_probe as usb
+from fiwi.ieee80211_dev import discover_wireless_for_map, wlan_chip_and_interface
-class AcronameManager:
+class FiWiHarness:
+    """
+    Orchestrates Acroname/BrainStem power, ``FiberRadioPort`` / :class:`fiwi.ssh_node.SshNode` routing,
+    and calibration.
+    Local USB reboot staggering uses :mod:`asyncio`. Remote SSH work during ``panel calibrate``
+    (fetching hub/port lists and baseline power-off) runs concurrent ``SshNode`` coroutines via
+    :func:`asyncio.run` when multiple SSH hosts or ports are involved.
+    """
    def __init__(self):
        stemmod.load_brainstem()
        self.hubs = []
@@ -98,7 +115,7 @@ class AcronameManager:
        if self.hubs and not first_pass_ok and not quiet:
            print(
-                "hub_manager: local hub(s) opened after retry (alternate USBHub3p → USBHub3c → USBHub2x4 order).",
+                "fiwi: local hub(s) opened after retry (alternate USBHub3p → USBHub3c → USBHub2x4 order).",
                flush=True,
            )
@@ -108,7 +125,7 @@ class AcronameManager:
        if specs:
            if quiet:
                print(
-                    "hub_manager: local USB shows Acroname module(s) but BrainStem connectFromSpec failed "
+                    "fiwi: local USB shows Acroname module(s) but BrainStem connectFromSpec failed "
                    "(udev 24ff / stem type / library); continuing without local hubs.",
                    file=sys.stderr,
                    flush=True,
@@ -126,7 +143,7 @@ class AcronameManager:
        else:
            if quiet:
                print(
-                    "hub_manager: no local Acroname hubs found; continuing (e.g. --ssh calibrate).",
+                    "fiwi: no local Acroname hubs found; continuing (e.g. --ssh calibrate).",
                    file=sys.stderr,
                    flush=True,
                )
@@ -351,14 +368,12 @@ class AcronameManager:
        else: print(f"[ERROR] {link} not found.")
    def panel_status(self):
-        """Rack positions 1…24: mapping and power via local hubs or per-entry ssh."""
+        """Rack positions 1…N (N from fiber_map patch_panel.slots or default): mapping and power."""
        doc = fm.load_fiber_map_or_exit()
-        ports = doc["fiber_ports"]
-        need_local = any(
-            fm.fiber_entry_hub_port(ports.get(str(n))) is not None
-            and fm.fiber_ssh_target(ports.get(str(n)) if isinstance(ports.get(str(n)), dict) else None)
-            is None
-            for n in range(1, PANEL_SLOTS + 1)
-        )
+        n_slots = effective_panel_slots(doc)
+        slot_frp = [FiberRadioPort.from_port_id(doc, n) for n in range(1, n_slots + 1)]
+        need_local = any(
+            x.hub_port() is not None and x.ssh_target() is None for x in slot_frp
+        )
        if need_local and not self.hubs and not self.connect():
            return
@@ -368,14 +383,12 @@ class AcronameManager:
            flush=True,
        )
        print("-" * 120)
-        for idx in range(PANEL_SLOTS):
+        for idx, frp in enumerate(slot_frp):
            panel_n = idx + 1
-            key = str(panel_n)
-            entry = ports.get(key)
-            tup = fm.fiber_entry_hub_port(entry) if entry is not None else None
-            ssh = fm.fiber_ssh_target(entry) if isinstance(entry, dict) else None
-            chip_s = fm.stored_chip_preview(entry)
-            pcie_s = fm.stored_pcie_preview(entry)
+            tup = frp.hub_port()
+            ssh = frp.ssh_target()
+            chip_s = frp.chip_preview()
+            pcie_s = frp.pcie_preview()
            if tup is None:
                print(
                    f"{panel_n:<7} | {'':<10} | {'':<18} | {'':<5} | {'':<8} | {chip_s:<28} | {pcie_s:<22}",
@@ -385,13 +398,16 @@ class AcronameManager:
            hub_1, port_0 = tup
            route = (ssh if ssh else "local")[:18]
            if ssh:
-                code, out, err = rs.ssh_forward_capture(ssh, ["status", f"{hub_1}.{port_0}"])
+                code, out, err = SshNode.parse(ssh).invoke_capture(
+                    ["status", f"{hub_1}.{port_0}"],
+                    defer=False,
+                )
                if code != 0:
                    pwr, cur = "?", "?"
                    if err.strip():
                        route = f"{ssh} (err)"[:18]
                else:
-                    pwr, cur = rs.parse_status_line_for_hub_port(out, hub_1, port_0)
+                    pwr, cur = parse_status_line_for_hub_port(out, hub_1, port_0)
            print(
                f"{panel_n:<7} | {hub_1}.{port_0:<9} | {route:<18} | {pwr:<5} | {cur:<8} | "
                f"{chip_s:<28} | {pcie_s:<22}",
@@ -426,24 +442,30 @@ class AcronameManager:
        )
    def panel_power(self, mode, panel_1based):
-        if panel_1based < 1 or panel_1based > PANEL_SLOTS:
-            print(f"Panel port must be 1…{PANEL_SLOTS}, got {panel_1based}", file=sys.stderr, flush=True)
-            sys.exit(1)
        doc = fm.load_fiber_map_or_exit()
-        key = str(panel_1based)
-        entry = doc["fiber_ports"].get(key)
-        tup = fm.fiber_entry_hub_port(entry) if entry is not None else None
-        if tup is None:
+        n_slots = effective_panel_slots(doc)
+        if panel_1based < 1 or panel_1based > n_slots:
+            print(f"Panel port must be 1…{n_slots}, got {panel_1based}", file=sys.stderr, flush=True)
+            sys.exit(1)
+        frp = FiberRadioPort.from_port_id(doc, panel_1based)
+        if not frp.is_mapped():
            print(
-                f"Panel {panel_1based} is not mapped (no fiber_ports[{key!r}] in fiber_map.json).",
+                f"Panel {panel_1based} is not mapped (no fiber_ports[{frp.map_key!r}] in fiber_map.json).",
                file=sys.stderr,
                flush=True,
            )
            sys.exit(1)
-        ssh = fm.fiber_ssh_target(entry) if isinstance(entry, dict) else None
+        ssh = frp.ssh_target()
        if ssh:
-            sys.exit(rs.ssh_forward(ssh, ["panel", mode, str(panel_1based)]))
-        tgt = f"{tup[0]}.{tup[1]}"
+            sys.exit(
+                SshNode.parse(ssh).invoke(
+                    ["panel", mode, str(panel_1based)],
+                    defer=False,
+                )
+            )
+        hub_1, port_0 = frp.hub_port()
+        assert hub_1 is not None and port_0 is not None
+        tgt = f"{hub_1}.{port_0}"
        print(f"Panel {panel_1based} → hub target {tgt} ({mode})", flush=True)
        if not self.power(mode, tgt):
            sys.exit(1)
@@ -451,45 +473,53 @@ class AcronameManager:
    def fiber_power(self, mode, fiber_port):
        """Power via fiber_map.json fiber_ports key (any positive integer id)."""
        doc = fm.load_fiber_map_or_exit()
-        key = str(int(fiber_port))
-        entry = doc["fiber_ports"].get(key)
-        tup = fm.fiber_entry_hub_port(entry) if entry is not None else None
-        if tup is None:
+        frp = FiberRadioPort.from_port_id(doc, int(fiber_port))
+        if not frp.is_mapped():
            print(
-                f"Fiber port {fiber_port} is not mapped (missing fiber_ports[{key!r}] in fiber_map.json).",
+                f"Fiber port {fiber_port} is not mapped (missing fiber_ports[{frp.map_key!r}] in fiber_map.json).",
                file=sys.stderr,
                flush=True,
            )
            sys.exit(1)
-        ssh = fm.fiber_ssh_target(entry) if isinstance(entry, dict) else None
+        ssh = frp.ssh_target()
        if ssh:
-            sys.exit(rs.ssh_forward(ssh, ["power", "fiber-port", key, mode.lower()]))
-        tgt = f"{tup[0]}.{tup[1]}"
+            sys.exit(
+                SshNode.parse(ssh).invoke(
+                    ["power", "fiber-port", frp.map_key, mode.lower()],
+                    defer=False,
+                )
+            )
+        hub_1, port_0 = frp.hub_port()
+        assert hub_1 is not None and port_0 is not None
+        tgt = f"{hub_1}.{port_0}"
        print(f"Fiber port {fiber_port} → hub target {tgt} ({mode})", flush=True)
        if not self.power(mode, tgt):
            sys.exit(1)
    def fiber_chip(self, fiber_port, save=False):
        """
-        Identify newly enumerated USB device(s) on this fiber's hub port (lsusb diff).
-        If the port was off, turns it on briefly, snapshots, then restores previous power.
-        With save=True, merge usb_id / chip_type / usb_lsusb_lines into fiber_map.json when new lines appear.
+        Local hubs only: lsusb diff on the mapped USB downstream port (usb_id / chip_type in map).
+        SSH-mapped fibers use PCIe metadata from calibrate / fiber_map.json; not forwarded.
        """
        doc = fm.load_fiber_map_or_exit()
-        key = str(int(fiber_port))
-        entry = doc["fiber_ports"].get(key)
-        tup = fm.fiber_entry_hub_port(entry) if entry is not None else None
-        if tup is None:
+        frp = FiberRadioPort.from_port_id(doc, int(fiber_port))
+        key = frp.map_key
+        if not frp.is_mapped():
            print(
                f"Fiber port {fiber_port} is not mapped (missing fiber_ports[{key!r}] in fiber_map.json).",
                file=sys.stderr,
                flush=True,
            )
            sys.exit(1)
-        ssh = fm.fiber_ssh_target(entry) if isinstance(entry, dict) else None
+        ssh = frp.ssh_target()
        if ssh:
-            extra = ["save"] if save else []
-            sys.exit(rs.ssh_forward(ssh, ["fiber", "chip", key, *extra]))
+            print(
+                "fiber chip: this fiber is SSH-mapped (PCIe/fiber path). Use panel calibrate PCIe prompts "
+                "or edit fiber_map.json; lsusb chip probe is not used for remote-mapped ports.",
+                file=sys.stderr,
+                flush=True,
+            )
+            sys.exit(2)
        if not shutil.which("lsusb"):
            print(
                "lsusb not found in PATH; install usbutils (e.g. usbutils package) on this host.",
@@ -499,7 +529,8 @@ class AcronameManager:
            sys.exit(1)
        if not self.hubs and not self.connect():
            return
-        hub_1, port_0 = tup
+        hub_1, port_0 = frp.hub_port()
+        assert hub_1 is not None and port_0 is not None
        h_idx = hub_1 - 1
        if h_idx < 0 or h_idx >= len(self.hubs):
            print(f"No hub {hub_1} connected.", file=sys.stderr, flush=True)
@@ -562,8 +593,7 @@ class AcronameManager:
    def fiber_map_status(self):
        """All fiber_ports entries with hub.port and live power (local BrainStem or ssh status)."""
        doc = fm.load_fiber_map_or_exit()
-        ports = doc["fiber_ports"]
-        keys = sorted(ports.keys(), key=fm.fiber_sort_key)
+        all_frp = list(FiberRadioPort.each_from_document(doc))
        print(
            f"{'Fiber':<8} | {'Hub.Port':<10} | {'Route':<18} | {'Pwr':<5} | {'mA':<8} | "
            f"{'Chip (saved)':<28} | {'PCIe (saved)':<22}",
@@ -571,18 +601,16 @@ class AcronameManager:
        )
        print("-" * 120)
        need_local = any(
-            fm.fiber_entry_hub_port(ports[k]) is not None
-            and not fm.fiber_ssh_target(ports[k] if isinstance(ports[k], dict) else None)
-            for k in keys
+            frp.hub_port() is not None and frp.ssh_target() is None for frp in all_frp
        )
        if need_local and not self.hubs and not self.connect():
            return
-        for key in keys:
-            entry = ports[key]
-            tup = fm.fiber_entry_hub_port(entry)
-            ssh = fm.fiber_ssh_target(entry) if isinstance(entry, dict) else None
-            chip_s = fm.stored_chip_preview(entry)
-            pcie_s = fm.stored_pcie_preview(entry)
+        for frp in all_frp:
+            key = frp.map_key
+            tup = frp.hub_port()
+            ssh = frp.ssh_target()
+            chip_s = frp.chip_preview()
+            pcie_s = frp.pcie_preview()
            if tup is None:
                print(
                    f"{key!s:<8} | {'':<10} | {'':<18} | {'':<5} | {'':<8} | {chip_s:<28} | {pcie_s:<22}",
@@ -592,13 +620,16 @@ class AcronameManager:
            hub_1, port_0 = tup
            route = (ssh if ssh else "local")[:18]
            if ssh:
-                code, out, err = rs.ssh_forward_capture(ssh, ["status", f"{hub_1}.{port_0}"])
+                code, out, err = SshNode.parse(ssh).invoke_capture(
+                    ["status", f"{hub_1}.{port_0}"],
+                    defer=False,
+                )
                if code != 0:
                    pwr, cur = "?", "?"
                    if err.strip():
                        route = f"{ssh} (err)"[:18]
                else:
-                    pwr, cur = rs.parse_status_line_for_hub_port(out, hub_1, port_0)
+                    pwr, cur = parse_status_line_for_hub_port(out, hub_1, port_0)
            print(
                f"{key!s:<8} | {hub_1}.{port_0:<9} | {route:<18} | {pwr:<5} | {cur:<8} | "
                f"{chip_s:<28} | {pcie_s:<22}",
@@ -633,25 +664,31 @@ class AcronameManager:
        )
    def panel_reboot(self, panel_1based, skip_empty=True):
-        if panel_1based < 1 or panel_1based > PANEL_SLOTS:
-            print(f"Panel port must be 1…{PANEL_SLOTS}, got {panel_1based}", file=sys.stderr, flush=True)
-            sys.exit(1)
        doc = fm.load_fiber_map_or_exit()
-        key = str(panel_1based)
-        entry = doc["fiber_ports"].get(key)
-        tup = fm.fiber_entry_hub_port(entry) if entry is not None else None
-        if tup is None:
+        n_slots = effective_panel_slots(doc)
+        if panel_1based < 1 or panel_1based > n_slots:
+            print(f"Panel port must be 1…{n_slots}, got {panel_1based}", file=sys.stderr, flush=True)
+            sys.exit(1)
+        frp = FiberRadioPort.from_port_id(doc, panel_1based)
+        if not frp.is_mapped():
            print(
-                f"Panel {panel_1based} is not mapped (no fiber_ports[{key!r}] in fiber_map.json).",
+                f"Panel {panel_1based} is not mapped (no fiber_ports[{frp.map_key!r}] in fiber_map.json).",
                file=sys.stderr,
                flush=True,
            )
            sys.exit(1)
-        ssh = fm.fiber_ssh_target(entry) if isinstance(entry, dict) else None
+        ssh = frp.ssh_target()
        sub = "reboot" if skip_empty else "reboot-force"
        if ssh:
-            sys.exit(rs.ssh_forward(ssh, ["panel", sub, str(panel_1based)]))
-        tgt = f"{tup[0]}.{tup[1]}"
+            sys.exit(
+                SshNode.parse(ssh).invoke(
+                    ["panel", sub, str(panel_1based)],
+                    defer=False,
+                )
+            )
+        hub_1, port_0 = frp.hub_port()
+        assert hub_1 is not None and port_0 is not None
+        tgt = f"{hub_1}.{port_0}"
        print(f"Panel {panel_1based} → hub target {tgt} ({sub})", flush=True)
        self.reboot(tgt, skip_empty=skip_empty)
@@ -681,7 +718,9 @@ class AcronameManager:
                    flush=True,
                )
            else:
-                rs.remote_hub_port_power(ssh_host, hub_1, port_0, False)
+                SshNode.parse(ssh_host).remote_hub_port_power(
+                    hub_1, port_0, False, defer=False
+                )
                print(
                    f">>> OFF {ssh_host} hub {hub_1} USB port {port_0}",
                    flush=True,
@@ -710,10 +749,78 @@ class AcronameManager:
            f.write("\n")
        print(f"Wrote {path}", flush=True)
def _prompt_patch_panel(self, doc: dict) -> PatchPanel:
"""
Field workflow: define the physical patch panel before any USB hub walk.
Persists under doc['patch_panel']; map fiber ids 1…slots align with panel numbers.
"""
have = PatchPanel.from_map_blob(doc.get("patch_panel"))
print(
"\n--- Patch panel (front-panel positions) ---\n"
f"Fiber map keys 1…N refer to these panel positions (power/status: panel <N>).\n",
flush=True,
)
if have is not None:
print(f"Current map: {have.slots} position(s).", flush=True)
try:
line = input(" [Enter]=keep, or type new position count: ").strip()
except EOFError:
line = ""
if not line:
n = have.slots
else:
try:
n = int(line)
if n < 1 or n > 256:
print(f" Invalid; keeping {have.slots}.", flush=True)
n = have.slots
except ValueError:
print(f" Invalid; keeping {have.slots}.", flush=True)
n = have.slots
else:
print(f"No patch_panel in map yet; default is {PANEL_SLOTS}.", flush=True)
try:
line = input(
f" How many front-panel positions? [{PANEL_SLOTS}]: "
).strip()
except EOFError:
line = ""
if not line:
n = PANEL_SLOTS
else:
try:
n = int(line)
if n < 1 or n > 256:
print(f" Invalid; using {PANEL_SLOTS}.", flush=True)
n = PANEL_SLOTS
except ValueError:
print(f" Invalid; using {PANEL_SLOTS}.", flush=True)
n = PANEL_SLOTS
label = ""
if have and have.label:
label = have.label
try:
lab_in = input(
" Optional panel label [Enter to skip]: "
).strip()
except EOFError:
lab_in = ""
if lab_in:
label = lab_in
panel = PatchPanel(slots=n, label=label)
doc["patch_panel"] = panel.to_map_blob()
print(
f" Patch panel set: {panel.slots} position(s)"
+ (f" ({panel.label})" if panel.label else "")
+ ".\n---\n",
flush=True,
)
return panel
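The prompt logic above repeats a clamp-and-default pattern for the slot count; it can be factored into one helper, shown here as a self-contained sketch (``parse_slot_count`` is an illustrative name, not part of the harness):

```python
# Illustrative: _prompt_patch_panel's input handling in one function.
# Empty input, non-integer input, or an out-of-range count all fall
# back to the default (the current map value or PANEL_SLOTS).
def parse_slot_count(line, default, lo=1, hi=256):
    line = line.strip()
    if not line:
        return default
    try:
        n = int(line)
    except ValueError:
        return default
    return n if lo <= n <= hi else default

assert parse_slot_count("", 24) == 24      # Enter keeps the default
assert parse_slot_count("48", 24) == 48    # valid count accepted
assert parse_slot_count("999", 24) == 24   # out of 1..256 range
assert parse_slot_count("abc", 24) == 24   # not an integer
```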
    def panel_calibrate(self, merge=False, limit=None, calibrate_ssh_hosts=None):
        """
-        Walk downstream USB ports in hub order: local hubs first, then each --ssh / calibrate_remotes host.
-        You type the fiber port id for each step; writes one fiber_map.json (adds ssh on remote steps).
+        Prompts for patch panel size first (field workflow), writes fiber_map.json, then walks USB hub ports
+        (local then --ssh). You assign each step to a fiber id (panel position 1…N).
        """
        calibrate_ssh_hosts = list(calibrate_ssh_hosts or [])
        try:
@ -730,6 +837,9 @@ class AcronameManager:
doc = {"fiber_ports": {}} doc = {"fiber_ports": {}}
doc = fm.ensure_fiber_map_document(doc) doc = fm.ensure_fiber_map_document(doc)
self._prompt_patch_panel(doc)
self._write_fiber_map_document(doc)
seen_h = set() seen_h = set()
cli_hosts = [] cli_hosts = []
for h in calibrate_ssh_hosts: for h in calibrate_ssh_hosts:
@ -745,8 +855,7 @@ class AcronameManager:
seen_h.add(s) seen_h.add(s)
cli_hosts.append(s) cli_hosts.append(s)
rs.apply_remote_ssh_env_file() env_rem = SshNodeConfig.load().calibrate_remotes
env_rem = os.environ.get("HUB_MANAGER_CALIBRATE_REMOTES", "").strip()
if env_rem: if env_rem:
added_from_env = [] added_from_env = []
for part in env_rem.split(","): for part in env_rem.split(","):
@ -757,7 +866,7 @@ class AcronameManager:
added_from_env.append(s) added_from_env.append(s)
if added_from_env: if added_from_env:
print( print(
f"hub_manager: calibrate remotes from HUB_MANAGER_CALIBRATE_REMOTES: " f"fiwi: calibrate remotes from FIWI_CALIBRATE_REMOTES: "
f"{', '.join(added_from_env)}", f"{', '.join(added_from_env)}",
file=sys.stderr, file=sys.stderr,
flush=True, flush=True,
@ -774,10 +883,10 @@ class AcronameManager:
saw_specs_connect_failed = True saw_specs_connect_failed = True
if not local_ok and cli_hosts: if not local_ok and cli_hosts:
print( print(
"hub_manager: Local USB shows Acroname module(s) but no hub opened — " "fiwi: Local USB shows Acroname module(s) but no hub opened — "
"this calibrate run will skip local ports and use only --ssh.\n" "this calibrate run will skip local ports and use only --ssh.\n"
" Fix Fedora access, then re-run to include local + Pi in one pass:\n" " Fix Fedora access, then re-run to include local + Pi in one pass:\n"
" python3 hub_manager.py setup && sudo install -m 0644 99-acroname.rules /etc/udev/rules.d/\n" " python3 fiwi.py setup && sudo install -m 0644 99-acroname.rules /etc/udev/rules.d/\n"
" (setup needs a working connect; if it still fails, generic vendor rule:)\n" " (setup needs a working connect; if it still fails, generic vendor rule:)\n"
' echo \'SUBSYSTEM=="usb", ATTR{idVendor}=="24ff", MODE="0666"\' | sudo tee /etc/udev/rules.d/99-acroname.rules\n' ' echo \'SUBSYSTEM=="usb", ATTR{idVendor}=="24ff", MODE="0666"\' | sudo tee /etc/udev/rules.d/99-acroname.rules\n'
" sudo udevadm control --reload-rules && sudo udevadm trigger\n" " sudo udevadm control --reload-rules && sudo udevadm trigger\n"
@@ -792,14 +901,28 @@ class AcronameManager:
        steps = []
        for hub_1, port_0 in local_ordered:
            steps.append((None, hub_1, port_0))
-       for host in cli_hosts:
-           remote_pairs = rs.fetch_calibrate_ports_json(host)
+       if cli_hosts:
+           use_def = resolve_remote_defer(None)
+           if use_def:
+               handles = [
+                   SshNode.parse(h).fetch_calibrate_ports_json(defer=True)
+                   for h in cli_hosts
+               ]
+               remote_results = [h.result() for h in handles]
+           else:
+               remote_results = [
+                   SshNode.parse(h).fetch_calibrate_ports_json(defer=False)
+                   for h in cli_hosts
+               ]
+       else:
+           remote_results = []
+       for host, remote_pairs in zip(cli_hosts, remote_results):
            if not remote_pairs:
                print(
-                   f"hub_manager: WARNING: 0 remote calibrate steps from {host!r} — SSH hub_manager returned "
+                   f"fiwi: WARNING: 0 remote calibrate steps from {host!r} — SSH fiwi returned "
                    "no port list (often: Pi used system python3 without brainstem). On this PC set "
-                   "HUB_MANAGER_REMOTE_PYTHON and HUB_MANAGER_REMOTE_SCRIPT in remote_ssh.env to paths "
-                   "that exist on the Pi (venv python3 + hub_manager.py). See remote_ssh.env.example.",
+                   "FIWI_REMOTE_PYTHON and FIWI_REMOTE_SCRIPT in remote_ssh.env to paths "
+                   "that exist on the Pi (venv python3 + fiwi.py). See remote_ssh.env.example.",
                    file=sys.stderr,
                    flush=True,
                )
@@ -812,15 +935,15 @@ class AcronameManager:
        if not steps:
            if saw_specs_connect_failed and not cli_hosts:
                print(
-                   "hub_manager: This PC sees Acroname USB module(s) in BrainStem discovery but connectFromSpec "
+                   "fiwi: This PC sees Acroname USB module(s) in BrainStem discovery but connectFromSpec "
                    "failed, and no remote host was given for calibrate.\n"
                    "  If your hubs are on a Raspberry Pi (or another machine), run from here:\n"
-                   "    python3 hub_manager.py panel calibrate merge --ssh pi@<pi-address>\n"
-                   "  (repeat --ssh for each host). Or put in fiber_map.json next to hub_manager.py:\n"
+                   "    python3 fiwi.py panel calibrate merge --ssh pi@<pi-address>\n"
+                   "  (repeat --ssh for each host). Or put in fiber_map.json next to fiwi.py:\n"
                    '    "calibrate_remotes": ["pi@<pi-address>"]\n'
                    "  Use remote_ssh.env on this PC if the Pi uses a venv path for python / script.\n"
                    "  If the hubs are really plugged into *this* Fedora box, fix USB access (udev 24ff, plugdev, "
-                   "unplug/replug) until `python3 hub_manager.py discover` opens them.",
+                   "unplug/replug) until `python3 fiwi.py discover` opens them.",
                    file=sys.stderr,
                    flush=True,
                )
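`fetch_calibrate_ports_json(defer=True)` above implies a handle whose `.result()` is collected later, so the per-host SSH round trips overlap instead of running back to back. The `SshNode` internals are not shown in this hunk; `DeferredRun` below is a hypothetical stand-in built on `subprocess.Popen` that sketches the same start-all-then-collect pattern:

```python
import subprocess

class DeferredRun:
    """Start the command immediately; collect its output later so many calls overlap."""
    def __init__(self, cmd):
        self._p = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
        )

    def result(self, timeout=30.0):
        out, err = self._p.communicate(timeout=timeout)
        return self._p.returncode, out, err

# Launch all "remote" calls first, then gather: wall time ≈ slowest call, not the sum.
handles = [DeferredRun(["echo", f"host-{i}"]) for i in range(3)]
results = [h.result() for h in handles]
```

The sequential `defer=False` path gives the same `(rc, out, err)` shape, which is why both branches feed the same `remote_results` list.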
@@ -849,15 +972,40 @@ class AcronameManager:
        for sh, _, _ in steps:
            if sh is not None and sh not in remote_hosts_ordered:
                remote_hosts_ordered.append(sh)
+       if remote_hosts_ordered:
+           use_def = resolve_remote_defer(None)
+           by_host = {h: [] for h in remote_hosts_ordered}
+           if use_def:
+               meta_off: list[tuple[str, int, int]] = []
+               handles = []
-       for host in remote_hosts_ordered:
-           pairs = sorted({(h, p) for s_h, h, p in steps if s_h == host})
-           n_off = len(pairs)
+               for host in remote_hosts_ordered:
+                   pairs = sorted({(h, p) for s_h, h, p in steps if s_h == host})
+                   node = SshNode.parse(host)
+                   for h, p in pairs:
+                       meta_off.append((host, h, p))
+                       handles.append(node.remote_hub_port_power(h, p, False, defer=True))
+               res_off = [h.result() for h in handles]
+               for (host, h, p), r in zip(meta_off, res_off):
+                   by_host[host].append(((h, p), r))
+           else:
+               for host in remote_hosts_ordered:
+                   pairs = sorted({(h, p) for s_h, h, p in steps if s_h == host})
+                   for h, p in pairs:
+                       r = SshNode.parse(host).remote_hub_port_power(
+                           h, p, False, defer=False
+                       )
+                       by_host[host].append(((h, p), r))
+           for host in remote_hosts_ordered:
+               chunk = by_host.get(host, [])
+               n_off = len(chunk)
+               mode_s = "concurrent (deferred)" if use_def else "sequential"
                print(
-                   f"Turning OFF every downstream USB port on {host} (baseline, {n_off} SSH round trip(s))…",
+                   f"Turning OFF every downstream USB port on {host} "
+                   f"(baseline, {n_off} SSH call(s), {mode_s})…",
                    flush=True,
                )
-               for i, (h, p) in enumerate(pairs, start=1):
-                   rc, rerr = rs.remote_hub_port_power(host, h, p, False)
+               for i, ((h, p), (rc, rerr)) in enumerate(chunk, start=1):
                    if rc != 0:
                        print(
                            f"  [{i}/{n_off}] off {h}.{p} → exit {rc}: {(rerr or '').strip()[:120]}",
@@ -868,14 +1016,20 @@ class AcronameManager:
                    time.sleep(0.6)
               print(f"  Remote baseline done for {host}.", flush=True)
+       n_panel = effective_panel_slots(doc)
        print(
-           f"Fiber map calibrate: {len(steps)} step(s) — {n_loc} local, {n_rem} via ssh.\n"
+           f"Fiber map calibrate: patch panel {n_panel} position(s); "
+           f"{len(steps)} USB step(s) — {n_loc} local, {n_rem} via ssh.\n"
            "All downstream ports were turned OFF first so only one port is ON per step.\n"
            "Order: all local hub ports (hub 1 port 0 first), then each --ssh host's ports in order.\n"
-           "Each step: lsusb snapshot (port OFF) → ON (~2s) → new lsusb lines (chip hint) → fiber id, s=skip, q=quit.\n"
+           "Assign each powered device to a fiber id 1…"
+           f"{n_panel} (panel position); ids outside that range are allowed if you need extras.\n"
+           "After ON (~2s): fiwi snapshots wireless interfaces on that host (sysfs + lspci/iw) — "
+           "local and SSH — for chip/interface in the map (no external fiwi script).\n"
+           "Local steps: lsusb OFF→ON may also suggest a USB downstream device; USB lsusb is not used on SSH hosts.\n"
+           "Each step: ON → wlan snapshot → fiber id, s=skip, q=quit.\n"
            "Port stays ON through optional PCIe prompts, then powers OFF. Ctrl-C anytime saves fiber_map.json and exits.\n"
-           "When you map a fiber, usb_id / chip_type are saved if new lsusb lines appeared.\n"
-           "Remote steps store ssh in fiber_map.json automatically.\n"
+           "Remote rows: ssh + hub.port + wlan + pcie (usb_id/chip_type from lsusb are cleared; chip_type may come from wlan).\n"
            "After each fiber id you can pick PCIe by number: 1…6 = known Adnacom H3 card, then SFP 1…4 "
            "(no paste), or m=manual / c=clear / Enter=keep. Edit fiber_map.json anytime (see example).",
            flush=True,
@@ -884,12 +1038,10 @@ class AcronameManager:
        ports = doc["fiber_ports"]
        for ssh_host, hub_1, port_0 in steps:
            route = "local" if ssh_host is None else ssh_host
-           before_lsusb = (
-               usb.lsusb_lines()
-               if ssh_host is None
-               else rs.remote_lsusb_lines(ssh_host)
-           )
+           # Local: lsusb diff can hint USB downstream devices. Remote: PCIe/fiber — no lsusb (avoids misleading chip_type).
+           before_lsusb = usb.lsusb_lines() if ssh_host is None else []
            chip_hint_lines = []
+           wlan_blob = None
            step_powered = False
            line = ""
            print(f"\n>>> ON {route} hub {hub_1} USB port {port_0}", flush=True)
@@ -899,7 +1051,9 @@ class AcronameManager:
                print(f"  {self._port_power_feedback(hub_1, port_0)}", flush=True)
                step_powered = True
            else:
-               rc, rmsg = rs.remote_hub_port_power(ssh_host, hub_1, port_0, True)
+               rc, rmsg = SshNode.parse(ssh_host).remote_hub_port_power(
+                   hub_1, port_0, True, defer=False
+               )
                if rc != 0:
                    snippet = (rmsg or "").strip()[:800]
                    print(
@@ -910,23 +1064,30 @@ class AcronameManager:
                    step_powered = False
                else:
                    time.sleep(0.25)
-                   print(f"  {rs.remote_port_power_feedback(ssh_host, hub_1, port_0)}", flush=True)
+                   print(
+                       f"  {SshNode.parse(ssh_host).remote_port_power_feedback(hub_1, port_0, defer=False)}",
+                       flush=True,
+                   )
                    step_powered = True
            time.sleep(2.0)
            if ssh_host is not None and not step_powered:
                print(
-                   "  Skipping this calibrate step (remote ON failed; fix SSH / Pi hub_manager exit code).",
+                   "  Skipping this calibrate step (remote ON failed; fix SSH / Pi fiwi exit code).",
                    flush=True,
                )
                continue
-           after_lsusb = (
-               usb.lsusb_lines()
-               if ssh_host is None
-               else rs.remote_lsusb_lines(ssh_host)
-           )
+           if step_powered:
+               if ssh_host is None:
+                   wlan_blob = discover_wireless_for_map()
+               else:
+                   wlan_blob = SshNode.parse(ssh_host).remote_wlan_info_json(
+                       defer=False
+                   )
+           if ssh_host is None:
+               after_lsusb = usb.lsusb_lines()
                chip_hint_lines = usb.lsusb_new_devices(before_lsusb, after_lsusb)
                if chip_hint_lines:
-                   print("  USB / chip (lsusb lines new vs port OFF):", flush=True)
+                   print("  USB (lsusb new vs port OFF):", flush=True)
                    for ln in chip_hint_lines:
                        print(f"    {ln}", flush=True)
                elif before_lsusb or after_lsusb:
@@ -936,7 +1097,66 @@ class AcronameManager:
                    )
                else:
                    print(
-                       "  (lsusb unavailable — install usbutils on this machine / Pi.)",
+                       "  (lsusb unavailable — install usbutils on this host.)",
                        flush=True,
                    )
+           else:
+               print(
+                   "  Remote step: no USB lsusb; use PCIe prompts after fiber id. "
+                   "Wlan snapshot runs on the SSH host (fiwi wlan-info-json).",
+                   flush=True,
+               )
+           if step_powered:
+               if not wlan_blob and ssh_host:
+                   print(
+                       "  (No wlan data from remote — deploy fiwi with wlan-info-json, "
+                       "or SSH/json failed.)",
+                       flush=True,
+                   )
+               elif wlan_blob and isinstance(wlan_blob, dict):
+                   prim = wlan_blob.get("primary")
+                   ifaces = wlan_blob.get("interfaces") or {}
+                   if isinstance(prim, dict) and prim.get("interface"):
+                       bits = [
+                           prim.get("iface_mode") or "",
+                           prim.get("operstate") or "",
+                           prim.get("connection_type") or "",
+                           prim.get("pci_address") or "",
+                           prim.get("driver") or "",
+                       ]
+                       if prim.get("mac_address"):
+                           bits.append(prim["mac_address"])
+                       if prim.get("chanspec"):
+                           bits.append(prim["chanspec"])
+                       if prim.get("bands_ghz"):
+                           bits.append(f"bands {prim.get('bands_ghz')}")
+                       bl = (
+                           prim.get("chip_label")
+                           or prim.get("product")
+                           or ""
+                       )
+                       extra = f" ({', '.join(x for x in bits if x)})" if any(bits) else ""
+                       print(
+                           f"  Radio: {prim.get('interface')} {bl[:72]}{extra}",
+                           flush=True,
+                       )
+                       if len(ifaces) > 1:
+                           others = sorted(k for k in ifaces if k != prim.get("interface"))
+                           if others:
+                               print(f"  Other wlan: {', '.join(others)}", flush=True)
+                   elif ifaces:
+                       print(
+                           f"  Radio: {len(ifaces)} wireless interface(s) on host (see wlan in map).",
+                           flush=True,
+                       )
+                   elif ssh_host:
+                       print(
+                           "  (Remote host: no wireless NIC seen in wlan snapshot.)",
+                           flush=True,
+                       )
+                   else:
+                       print(
+                           "  (No wireless interfaces found under /sys/class/net.)",
+                           flush=True,
+                       )
            try:
@@ -1012,6 +1232,29 @@ class AcronameManager:
                ):
                    entry.pop(k, None)
                entry.update(fm.chip_fields_from_lsusb_lines(chip_hint_lines))
+               if ssh_host:
+                   for k in (
+                       "usb_lsusb_lines",
+                       "usb_id",
+                       "usb_ids",
+                       "chip_type",
+                       "chip_profiled_at",
+                   ):
+                       entry.pop(k, None)
+               if (
+                   step_powered
+                   and wlan_blob
+                   and isinstance(wlan_blob, dict)
+                   and wlan_blob.get("interfaces")
+               ):
+                   entry["wlan"] = wlan_blob
+                   chip_guess, if_guess = wlan_chip_and_interface(wlan_blob)
+                   if if_guess:
+                       entry["radio_interface"] = if_guess
+                   elif "radio_interface" in entry:
+                       entry.pop("radio_interface", None)
+                   if chip_guess and (ssh_host or not chip_hint_lines):
+                       entry["chip_type"] = chip_guess
                try:
                    action, pdata = fm.prompt_pcie_metadata_for_calibrate(entry.get("pcie"))
                except KeyboardInterrupt:

fiwi/ieee80211_dev.py (new file, 768 lines)
@@ -0,0 +1,768 @@
"""
IEEE 802.11 device identity on the host: sysfs (class net / phy80211) plus optional lspci / iw.
Module name ``ieee80211_dev`` mirrors kernel-style ``80211_dev`` naming; a leading digit is invalid in Python imports.
Types:
    WirelessInterface: static PCI/USB identity + dynamic sysfs/iw/ethtool via @property
    WirelessSnapshot: full host scan (interfaces + primary + timestamp)
Dynamic properties (cached per instance until ``refresh_runtime()``): ``mac_address``, ``operstate``,
``is_up``, ``carrier_up``, ``bssid``, ``iface_mode`` (AP/STA/…), ``is_ap`` / ``is_sta``,
``firmware_version``, ``driver_version`` (ethtool), ``channel``, ``center_freq_mhz``,
``channel_width_mhz``, ``center1_mhz``, ``chanspec``. The sysfs ``driver`` module name remains a field.
Serializes to fiber_map ``fiber_ports[].wlan``; remote capture via ``fiwi.py wlan-info-json``.
"""
from __future__ import annotations
import os
import re
import subprocess
import time
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Tuple
SYS_NET = "/sys/class/net"
# Keys stored under WirelessInterface runtime blob (JSON round-trip + live discovery)
_RUNTIME_JSON_KEYS = frozenset(
{
"mac_address",
"operstate",
"is_up",
"carrier_up",
"bssid",
"iface_mode",
"firmware_version",
"driver_version",
"channel",
"center_freq_mhz",
"channel_width_mhz",
"center1_mhz",
"chanspec",
}
)
def _run_capture(cmd: List[str], timeout: float = 12.0) -> str:
try:
p = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
except (OSError, subprocess.TimeoutExpired):
return ""
if p.returncode != 0:
return ""
return p.stdout or ""
def _is_wireless_iface(net_name: str) -> bool:
base = os.path.join(SYS_NET, net_name)
if os.path.isdir(os.path.join(base, "wireless")):
return True
if os.path.isdir(os.path.join(base, "phy80211")):
return True
uevent = os.path.join(base, "device", "uevent")
if os.path.isfile(uevent):
try:
with open(uevent, encoding="utf-8") as f:
if "DEVTYPE=wlan" in f.read():
return True
except OSError:
pass
return False
def _read_link_basename(path: str) -> str:
try:
target = os.readlink(path)
except OSError:
return ""
return os.path.basename(os.path.normpath(target))
_BDF_DIR = re.compile(r"^([0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7])$", re.I)
def _pci_endpoint_from_device(device_realpath: str) -> Tuple[str, str, str]:
cur = device_realpath
pci_addr, vendor_id, device_id = "", "", ""
for _ in range(14):
base = os.path.basename(cur)
if _BDF_DIR.match(base) and os.path.isfile(os.path.join(cur, "vendor")):
pci_addr = base.lower()
vendor_id, device_id = _read_pci_vendor_device(cur)
break
parent = os.path.dirname(cur)
if parent == cur:
break
cur = parent
return pci_addr, vendor_id, device_id
def _connection_type(realpath_device: str) -> str:
low = realpath_device.lower()
if re.search(r"/usb[0-9]*/", low) or "/usb/usb" in low:
return "USB"
if re.search(r"[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]", low, re.I):
return "PCIe"
return "unknown"
def _read_pci_vendor_device(pci_dir: str) -> Tuple[str, str]:
vend, dev = "", ""
try:
with open(os.path.join(pci_dir, "vendor"), encoding="utf-8") as f:
vend = f.read().strip().lower()
if vend.startswith("0x"):
vend = vend[2:]
with open(os.path.join(pci_dir, "device"), encoding="utf-8") as f:
dev = f.read().strip().lower()
if dev.startswith("0x"):
dev = dev[2:]
except OSError:
pass
return vend, dev
def _usb_ids_from_uevent(device_dir: str) -> Tuple[str, str]:
ue = os.path.join(device_dir, "uevent")
vid, pid = "", ""
try:
with open(ue, encoding="utf-8") as f:
for ln in f:
if ln.startswith("PRODUCT="):
rest = ln.strip().split("=", 1)[1]
parts = rest.split("/")
if len(parts) >= 2:
vid = parts[0].lower()[-4:].rjust(4, "0")
pid = parts[1].lower()[-4:].rjust(4, "0")
break
except OSError:
pass
return vid, pid
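`_usb_ids_from_uevent` relies on the `PRODUCT=vid/pid/bcd` line that USB uevent files carry, whose hex fields are unpadded and must be widened to four digits. A reduced sketch of just that parsing rule (the sample ids below are only illustrative):

```python
def usb_ids_from_product(line: str) -> tuple:
    """PRODUCT=vid/pid/... from a sysfs uevent → 4-hex-digit, zero-padded ids."""
    rest = line.strip().split("=", 1)[1]
    parts = rest.split("/")
    if len(parts) < 2:
        return "", ""
    vid = parts[0].lower()[-4:].rjust(4, "0")
    pid = parts[1].lower()[-4:].rjust(4, "0")
    return vid, pid

usb_ids_from_product("PRODUCT=bda/8812/0")  # → ("0bda", "8812")
```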
def _lspci_bdf_key(segment: str) -> str:
s = segment.strip().lower()
if re.match(r"^[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$", s, re.I):
return "0000:" + s
return s
def _parse_lspci_nn_wireless(stdout: str) -> Dict[str, Dict[str, str]]:
by_pci: Dict[str, Dict[str, str]] = {}
pci_start = re.compile(
r"^([0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]|[0-9a-f]{2}:[0-9a-f]{2}\.[0-7])\s+(.+)$",
re.I,
)
tail_ids = re.compile(r"\[([0-9a-f]{4}):([0-9a-f]{4})\]\s*$", re.I)
for line in stdout.splitlines():
line = line.strip()
m = pci_start.match(line)
if not m:
continue
bdf_n, rest = _lspci_bdf_key(m.group(1)), m.group(2)
lowrest = rest.lower()
if "ethernet controller" in lowrest or "[0200]" in rest:
continue
if "[0280]" not in rest and "[0282]" not in rest:
if not any(
x in lowrest
for x in ("wireless", "wi-fi", "wifi", "802.11", "wlan")
):
continue
elif "ethernet" in lowrest and not any(
x in lowrest for x in ("wireless", "802.11", "wifi", "wi-fi", "wlan")
):
continue
vendor, device = "", ""
tm = tail_ids.search(rest)
if tm:
vendor, device = tm.group(1).lower(), tm.group(2).lower()
by_pci[bdf_n] = {
"product": rest,
"lspci_vendor_id": vendor,
"lspci_device_id": device,
}
return by_pci
def _normalize_pci(addr: str) -> str:
if not addr:
return ""
a = addr.strip().lower()
if not a.startswith("0000:") and re.match(
r"^[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$", a, re.I
):
a = "0000:" + a
return a
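`lspci` prints the short `bus:dev.fn` form while sysfs uses the full `domain:bus:dev.fn`; `_lspci_bdf_key` and `_normalize_pci` both converge on the long form so the merge can key on one spelling. The rule in isolation:

```python
import re

def normalize_pci(addr: str) -> str:
    """Short-form BDF gains the 0000: domain; full form (or junk) passes through."""
    a = addr.strip().lower()
    if re.match(r"^[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$", a):
        a = "0000:" + a
    return a

normalize_pci("03:00.0")       # → "0000:03:00.0"
normalize_pci("0000:03:00.0")  # → "0000:03:00.0"
```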
def _merge_lspci_into(
interfaces: Dict[str, WirelessInterface],
lspci_by_pci: Dict[str, Dict[str, str]],
sysfs_device_paths: Dict[str, str],
) -> None:
for name, wi in interfaces.items():
keys = [_normalize_pci(wi.pci_address)]
sd = sysfs_device_paths.get(name, "")
if sd:
ep, _, _ = _pci_endpoint_from_device(sd)
keys.append(_normalize_pci(ep))
for key in keys:
if not key or key not in lspci_by_pci:
continue
merge = lspci_by_pci[key]
if merge.get("product"):
wi.product = merge["product"]
if merge.get("lspci_vendor_id") and not wi.vendor_id:
wi.vendor_id = merge["lspci_vendor_id"]
if merge.get("lspci_device_id") and not wi.device_id:
wi.device_id = merge["lspci_device_id"]
break
def _iw_iface_phy_map() -> Dict[str, str]:
out = _run_capture(["iw", "dev"], timeout=8.0)
if not out.strip():
return {}
mapping: Dict[str, str] = {}
current_phy: Optional[str] = None
for raw in out.splitlines():
line = raw.rstrip()
if not line.startswith("\t"):
m = re.match(r"phy#(\d+)\b", line.strip())
if m:
current_phy = f"phy{m.group(1)}"
else:
current_phy = None
continue
inner = line.lstrip()
m = re.match(r"Interface\s+(\S+)", inner)
if m and current_phy:
mapping[m.group(1)] = current_phy
return mapping
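`iw dev` groups its output by phy: `phy#N` at column 0, with tab-indented `Interface <name>` lines beneath it, and the map above keys interface → phy. A self-contained sketch of that state machine against a canned sample:

```python
import re

def iface_phy_map(iw_dev_output: str) -> dict:
    """Track the current phy# heading; tab-indented Interface lines belong to it."""
    mapping, current_phy = {}, None
    for raw in iw_dev_output.splitlines():
        if not raw.startswith("\t"):
            m = re.match(r"phy#(\d+)\b", raw.strip())
            current_phy = f"phy{m.group(1)}" if m else None
            continue
        m = re.match(r"Interface\s+(\S+)", raw.lstrip())
        if m and current_phy:
            mapping[m.group(1)] = current_phy
    return mapping

sample = "phy#0\n\tInterface wlan0\n\t\ttype managed\nphy#1\n\tInterface wlan1\n"
iface_phy_map(sample)  # → {"wlan0": "phy0", "wlan1": "phy1"}
```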
def _bands_from_iw_phy(phyname: str) -> List[float]:
if not phyname:
return []
out = _run_capture(["iw", phyname, "info"], timeout=10.0)
bands: List[float] = []
for line in out.splitlines():
line = line.strip()
if line.startswith("Band "):
try:
n = int(line.split()[1])
except (ValueError, IndexError):
continue
if n == 1:
bands.append(2.4)
elif n == 2:
bands.append(5.0)
elif n == 3:
bands.append(6.0)
return sorted(set(bands))
def _attach_iw_to_interfaces(interfaces: Dict[str, WirelessInterface]) -> None:
iface_to_phy = _iw_iface_phy_map()
for name, wi in interfaces.items():
phy = iface_to_phy.get(name, "")
if not phy:
continue
wi.wiphy = phy
bs = _bands_from_iw_phy(phy)
if bs:
wi.bands_ghz = bs
def _read_sysfs_trim(path: str) -> str:
try:
with open(path, encoding="utf-8") as f:
return f.read().strip()
except OSError:
return ""
def _parse_iw_dev_info(text: str) -> Dict[str, Any]:
"""Parse `iw dev <if> info` for nl80211 type and current channel / width."""
r: Dict[str, Any] = {}
if not text.strip():
return r
for raw in text.splitlines():
line = raw.strip()
m = re.match(r"type\s+(\S+)", line, re.I)
if m:
t = m.group(1).lower()
if t == "ap":
r["iface_mode"] = "AP"
elif t == "managed":
r["iface_mode"] = "STA"
elif t == "monitor":
r["iface_mode"] = "monitor"
elif t == "mesh_point":
r["iface_mode"] = "mesh"
            elif t == "p2p-client":
                r["iface_mode"] = "P2P-client"
            elif t == "p2p-go":
                r["iface_mode"] = "P2P-GO"
else:
r["iface_mode"] = m.group(1)
continue
m = re.search(r"channel\s+(\d+)\s+\((\d+)\s+MHz\)", line, re.I)
if m:
try:
r["channel"] = int(m.group(1))
r["center_freq_mhz"] = int(m.group(2))
except ValueError:
pass
m = re.search(r"width:\s*(\d+)\s*MHz", line, re.I)
if m:
try:
r["channel_width_mhz"] = int(m.group(1))
except ValueError:
pass
m = re.search(r"center1:\s*(\d+)\s*MHz", line, re.I)
if m:
try:
r["center1_mhz"] = int(m.group(1))
except ValueError:
pass
_fill_chanspec(r)
return r
def _fill_chanspec(r: Dict[str, Any]) -> None:
cf = r.get("center_freq_mhz")
w = r.get("channel_width_mhz")
c1 = r.get("center1_mhz")
if cf is not None and w is not None:
r["chanspec"] = f"{cf}@{w}MHz"
if c1 is not None:
r["chanspec"] += f" center1={c1}"
elif cf is not None:
r["chanspec"] = str(cf)
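`_fill_chanspec` folds frequency, width, and center1 into one display string; a standalone copy of the rule showing the three shapes it can produce:

```python
def fill_chanspec(r: dict) -> None:
    """freq+width → "CF@W MHz" (plus center1 when known); freq alone → bare number."""
    cf, w, c1 = r.get("center_freq_mhz"), r.get("channel_width_mhz"), r.get("center1_mhz")
    if cf is not None and w is not None:
        r["chanspec"] = f"{cf}@{w}MHz"
        if c1 is not None:
            r["chanspec"] += f" center1={c1}"
    elif cf is not None:
        r["chanspec"] = str(cf)

r1 = {"center_freq_mhz": 5180, "channel_width_mhz": 80, "center1_mhz": 5210}
fill_chanspec(r1)  # r1["chanspec"] → "5180@80MHz center1=5210"
r2 = {"center_freq_mhz": 2437}
fill_chanspec(r2)  # r2["chanspec"] → "2437"
```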
def _parse_iw_dev_link(text: str) -> Dict[str, Any]:
"""Parse `iw dev <if> link` (STA: BSSID, freq when associated)."""
r: Dict[str, Any] = {}
if not text.strip():
return r
for raw in text.splitlines():
line = raw.strip()
m = re.match(r"Connected to\s+([0-9a-f:]+)", line, re.I)
if m:
r["bssid"] = m.group(1).lower()
continue
m = re.match(r"freq:\s*(\d+)", line, re.I)
if m:
try:
r["center_freq_mhz"] = int(m.group(1))
except ValueError:
pass
_fill_chanspec(r)
return r
def _parse_ethtool_i(text: str) -> Dict[str, str]:
out: Dict[str, str] = {}
for raw in text.splitlines():
line = raw.strip()
if ":" not in line:
continue
key, _, val = line.partition(":")
key, val = key.strip().lower(), val.strip()
if key == "firmware-version":
out["firmware_version"] = val
elif key == "version":
out["driver_version"] = val
elif key == "driver":
out["ethtool_driver"] = val
elif key == "expansion-rom-version":
out["expansion_rom_version"] = val
return out
def _merge_runtime_dict(base: Dict[str, Any], extra: Dict[str, Any]) -> None:
for k, v in extra.items():
if v is None or v == "":
continue
if k not in base or base[k] in (None, "", 0):
base[k] = v
def _collect_iface_runtime(ifname: str) -> Dict[str, Any]:
"""Live sysfs + iw + ethtool; merged into WirelessInterface._runtime."""
r: Dict[str, Any] = {}
base = os.path.join(SYS_NET, ifname)
mac = _read_sysfs_trim(os.path.join(base, "address"))
if mac:
r["mac_address"] = mac.lower()
op = _read_sysfs_trim(os.path.join(base, "operstate")).lower()
if op:
r["operstate"] = op
r["is_up"] = op == "up"
car_raw = _read_sysfs_trim(os.path.join(base, "carrier"))
if car_raw in ("0", "1"):
r["carrier_up"] = car_raw == "1"
info_txt = _run_capture(["iw", "dev", ifname, "info"], timeout=10.0)
_merge_runtime_dict(r, _parse_iw_dev_info(info_txt))
link_txt = _run_capture(["iw", "dev", ifname, "link"], timeout=10.0)
link_d = _parse_iw_dev_link(link_txt)
if link_d.get("bssid"):
r["bssid"] = link_d["bssid"]
if "center_freq_mhz" in link_d:
r["center_freq_mhz"] = link_d["center_freq_mhz"]
_fill_chanspec(r)
eth_txt = _run_capture(["ethtool", "-i", ifname], timeout=8.0)
eth = _parse_ethtool_i(eth_txt)
if eth.get("firmware_version"):
r["firmware_version"] = eth["firmware_version"]
if eth.get("driver_version"):
r["driver_version"] = eth["driver_version"]
return r
def _coerce_runtime_json_types(rt: Dict[str, Any]) -> None:
for k in ("channel", "center_freq_mhz", "channel_width_mhz", "center1_mhz"):
if k not in rt:
continue
v = rt[k]
if isinstance(v, (int, float)):
rt[k] = int(v)
elif isinstance(v, str) and v.strip().isdigit():
rt[k] = int(v.strip())
if "is_up" in rt:
rt["is_up"] = bool(rt["is_up"])
if "carrier_up" in rt:
cv = rt["carrier_up"]
rt["carrier_up"] = None if cv is None else bool(cv)
@dataclass
class WirelessInterface:
"""One 802.11 netdev: static identity from sysfs/lspci; live link/chan/MAC via properties."""
name: str
driver: str = ""
pci_address: str = ""
connection_type: str = "unknown"
vendor_id: str = ""
device_id: str = ""
product: Optional[str] = None
chip_label: str = ""
wiphy: Optional[str] = None
bands_ghz: Optional[List[float]] = None
_runtime_loaded: bool = field(default=False, repr=False, compare=False)
_runtime: Dict[str, Any] = field(default_factory=dict, repr=False, compare=False)
def _ensure_runtime(self) -> None:
if self._runtime_loaded:
return
self._runtime = _collect_iface_runtime(self.name)
self._runtime_loaded = True
def refresh_runtime(self) -> None:
"""Re-query sysfs / iw / ethtool (next property read uses fresh data)."""
self._runtime_loaded = False
self._runtime.clear()
@property
def mac_address(self) -> str:
self._ensure_runtime()
return str(self._runtime.get("mac_address", ""))
@property
def operstate(self) -> str:
self._ensure_runtime()
return str(self._runtime.get("operstate", ""))
@property
def is_up(self) -> bool:
self._ensure_runtime()
return bool(self._runtime.get("is_up", False))
@property
def carrier_up(self) -> Optional[bool]:
self._ensure_runtime()
if "carrier_up" not in self._runtime:
return None
return bool(self._runtime["carrier_up"])
@property
def bssid(self) -> str:
self._ensure_runtime()
return str(self._runtime.get("bssid", ""))
@property
def iface_mode(self) -> str:
"""nl80211 interface type: AP, STA, monitor, mesh, … (empty if unknown)."""
self._ensure_runtime()
return str(self._runtime.get("iface_mode", ""))
@property
def is_ap(self) -> bool:
return self.iface_mode == "AP"
@property
def is_sta(self) -> bool:
return self.iface_mode == "STA"
@property
def firmware_version(self) -> str:
self._ensure_runtime()
return str(self._runtime.get("firmware_version", ""))
@property
def driver_version(self) -> str:
"""Driver build/version string from ethtool when available."""
self._ensure_runtime()
return str(self._runtime.get("driver_version", ""))
@property
def channel(self) -> Optional[int]:
self._ensure_runtime()
v = self._runtime.get("channel")
return int(v) if isinstance(v, int) else None
@property
def center_freq_mhz(self) -> Optional[int]:
self._ensure_runtime()
v = self._runtime.get("center_freq_mhz")
return int(v) if isinstance(v, int) else None
@property
def channel_width_mhz(self) -> Optional[int]:
self._ensure_runtime()
v = self._runtime.get("channel_width_mhz")
return int(v) if isinstance(v, int) else None
@property
def center1_mhz(self) -> Optional[int]:
self._ensure_runtime()
v = self._runtime.get("center1_mhz")
return int(v) if isinstance(v, int) else None
@property
def chanspec(self) -> str:
"""Human-readable channel summary (freq / width / center1) when known."""
self._ensure_runtime()
return str(self._runtime.get("chanspec", ""))
@classmethod
def from_sysfs(cls, net_name: str) -> Optional[WirelessInterface]:
base = os.path.join(SYS_NET, net_name)
device_ln = os.path.join(base, "device")
if not os.path.isdir(device_ln):
return None
real_dev = os.path.realpath(device_ln)
driver = _read_link_basename(os.path.join(device_ln, "driver"))
if not driver:
driver = _read_link_basename(os.path.join(real_dev, "driver"))
conn = _connection_type(real_dev)
pci_addr, vendor_id, device_id = _pci_endpoint_from_device(real_dev)
if conn == "USB" and (not vendor_id or not device_id):
uv, up = _usb_ids_from_uevent(device_ln)
vendor_id, device_id = uv or vendor_id, up or device_id
return cls(
name=net_name,
driver=driver or "",
pci_address=pci_addr or "",
connection_type=conn,
vendor_id=vendor_id,
device_id=device_id,
)
def refresh_chip_label(self) -> None:
if self.product:
self.chip_label = self.product[:220]
elif self.vendor_id and self.device_id:
self.chip_label = f"{self.vendor_id}:{self.device_id}"
else:
self.chip_label = (self.driver or self.name)[:220]
def primary_score(self) -> Tuple[int, str]:
score = 0
if self.connection_type == "PCIe":
score += 10
elif self.connection_type == "USB":
score += 3
if self.name == "wlan0":
score += 5
if self.driver:
score += 1
return score, self.name
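Primary selection is a `max` over `(score, name)` tuples: PCIe beats USB, `wlan0` gets a bump, and ties fall to the lexically larger name. A cut-down stand-in (`Iface` is hypothetical, keeping only the scored fields):

```python
from dataclasses import dataclass

@dataclass
class Iface:
    name: str
    connection_type: str
    driver: str = ""

    def primary_score(self):
        score = 0
        if self.connection_type == "PCIe":
            score += 10
        elif self.connection_type == "USB":
            score += 3
        if self.name == "wlan0":
            score += 5
        if self.driver:
            score += 1
        return score, self.name  # name makes the tie-break deterministic

cands = [Iface("wlan0", "USB", "rtw88"), Iface("wlan1", "PCIe", "iwlwifi")]
max(cands, key=lambda w: w.primary_score()).name  # → "wlan1" (PCIe outranks wlan0's bump)
```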
def to_map_dict(self) -> Dict[str, Any]:
self._ensure_runtime()
d: Dict[str, Any] = {
"interface": self.name,
"driver": self.driver,
"pci_address": self.pci_address,
"connection_type": self.connection_type,
"vendor_id": self.vendor_id,
"device_id": self.device_id,
"chip_label": self.chip_label,
}
if self.product:
d["product"] = self.product
if self.wiphy:
d["wiphy"] = self.wiphy
if self.bands_ghz:
d["bands_ghz"] = list(self.bands_ghz)
for k in sorted(_RUNTIME_JSON_KEYS):
if k in self._runtime:
d[k] = self._runtime[k]
return d
@classmethod
def from_map_dict(cls, d: Dict[str, Any]) -> WirelessInterface:
rt: Dict[str, Any] = {}
static: Dict[str, Any] = {}
for k, v in d.items():
if k in _RUNTIME_JSON_KEYS:
rt[k] = v
else:
static[k] = v
_coerce_runtime_json_types(rt)
bands = static.get("bands_ghz")
if bands is not None and not isinstance(bands, list):
bands = None
wi = cls(
name=str(static.get("interface") or ""),
driver=str(static.get("driver") or ""),
pci_address=str(static.get("pci_address") or ""),
connection_type=str(static.get("connection_type") or "unknown"),
vendor_id=str(static.get("vendor_id") or ""),
device_id=str(static.get("device_id") or ""),
product=static.get("product") if isinstance(static.get("product"), str) else None,
chip_label=str(static.get("chip_label") or ""),
wiphy=static.get("wiphy") if isinstance(static.get("wiphy"), str) else None,
bands_ghz=[float(x) for x in bands] if bands else None,
)
wi._runtime = rt
wi._runtime_loaded = bool(rt)
return wi
@dataclass
class WirelessSnapshot:
"""Wireless layout on one host at one time; serializes to fiber_ports[].wlan."""
wlan_scanned_at: str
interfaces: Dict[str, WirelessInterface] = field(default_factory=dict)
_primary_from_map: Optional[WirelessInterface] = field(
default=None, compare=False, repr=False
)
@property
def primary(self) -> Optional[WirelessInterface]:
if self._primary_from_map is not None:
return self._primary_from_map
if not self.interfaces:
return None
return max(self.interfaces.values(), key=lambda w: w.primary_score())
def to_map_dict(self) -> Dict[str, Any]:
prim = self.primary
return {
"wlan_scanned_at": self.wlan_scanned_at,
"interfaces": {k: v.to_map_dict() for k, v in sorted(self.interfaces.items())},
"primary": prim.to_map_dict() if prim else None,
}
def chip_and_interface(self) -> Tuple[str, str]:
prim = self.primary
if prim is None:
return "", ""
chip = (prim.chip_label or prim.product or "").strip()
return chip, prim.name.strip()
@classmethod
def discover(cls) -> WirelessSnapshot:
scanned = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
if not os.path.isdir(SYS_NET):
return cls(wlan_scanned_at=scanned, interfaces={})
names = sorted(
n
for n in os.listdir(SYS_NET)
if n != "lo" and os.path.isdir(os.path.join(SYS_NET, n))
)
interfaces: Dict[str, WirelessInterface] = {}
sysfs_paths: Dict[str, str] = {}
for n in names:
if not _is_wireless_iface(n):
continue
wi = WirelessInterface.from_sysfs(n)
if wi is None:
continue
sysfs_paths[n] = os.path.realpath(os.path.join(SYS_NET, n, "device"))
interfaces[n] = wi
lspci_out = _run_capture(["lspci", "-nn"], timeout=15.0)
if lspci_out:
_merge_lspci_into(interfaces, _parse_lspci_nn_wireless(lspci_out), sysfs_paths)
_attach_iw_to_interfaces(interfaces)
for wi in interfaces.values():
wi.refresh_chip_label()
return cls(wlan_scanned_at=scanned, interfaces=interfaces)
@classmethod
def from_map_dict(cls, d: Dict[str, Any]) -> Optional[WirelessSnapshot]:
if not isinstance(d, dict):
return None
raw_if = d.get("interfaces")
if raw_if is None:
raw_if = {}
if not isinstance(raw_if, dict):
return None
interfaces: Dict[str, WirelessInterface] = {}
for _key, val in raw_if.items():
if not isinstance(val, dict):
continue
wi = WirelessInterface.from_map_dict(val)
if wi.name:
interfaces[wi.name] = wi
ts = d.get("wlan_scanned_at")
pr = d.get("primary")
primary_override: Optional[WirelessInterface] = None
if isinstance(pr, dict) and pr.get("interface"):
primary_override = WirelessInterface.from_map_dict(pr)
return cls(
wlan_scanned_at=str(ts) if isinstance(ts, str) else "",
interfaces=interfaces,
_primary_from_map=primary_override,
)
def discover_wireless_for_map() -> Dict[str, Any]:
"""JSON-serializable dict for fiber_map / wlan-info-json (stable schema)."""
return WirelessSnapshot.discover().to_map_dict()
def wlan_chip_and_interface(wlan_doc: Optional[Dict[str, Any]]) -> Tuple[str, str]:
"""(chip_label, interface_name) from a wlan snapshot dict, or ('','')."""
if not isinstance(wlan_doc, dict):
return "", ""
snap = WirelessSnapshot.from_map_dict(wlan_doc)
if snap is None:
return "", ""
return snap.chip_and_interface()
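``WirelessSnapshot.primary`` picks the best radio by ``primary_score``: PCIe beats USB, ``wlan0`` gets a naming bonus, and a known driver breaks remaining ties. A standalone sketch of that rule (``Iface`` is a minimal stand-in, not the fiwi class):

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class Iface:
    name: str
    connection_type: str = "unknown"
    driver: str = ""

    def primary_score(self) -> Tuple[int, str]:
        # Same weights as WirelessInterface.primary_score above.
        score = 0
        if self.connection_type == "PCIe":
            score += 10
        elif self.connection_type == "USB":
            score += 3
        if self.name == "wlan0":
            score += 5
        if self.driver:
            score += 1
        return score, self.name


candidates = [
    Iface("wlan0", "USB", "mt76x2u"),   # 3 + 5 + 1 = 9
    Iface("wlan1", "PCIe", "iwlwifi"),  # 10 + 1 = 11, wins despite the wlan0 bonus
]
primary = max(candidates, key=lambda w: w.primary_score())
print(primary.name)  # → wlan1
```

The ``(score, name)`` tuple makes ties deterministic: equal scores fall back to lexical interface-name order.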

fiwi/patch_panel.py Normal file

@ -0,0 +1,60 @@
"""
Physical patch panel: front-panel position count for the rack (field workflow).
Stored in fiber_map.json as ``patch_panel``: ``{ "slots": N, "label": "" }``.
USB hub calibrate still walks hub ports; map keys 1…N align with panel positions.
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import Any, Dict, Optional
from fiwi.constants import PANEL_SLOTS
_MAX_SLOTS = 256
@dataclass(frozen=True)
class PatchPanel:
"""Instantiated panel: ``slots`` front-panel positions (numbered 1…slots)."""
slots: int
label: str = ""
def __post_init__(self) -> None:
if self.slots < 1:
raise ValueError("patch panel slots must be >= 1")
if self.slots > _MAX_SLOTS:
raise ValueError(f"patch panel slots must be <= {_MAX_SLOTS}")
def to_map_blob(self) -> Dict[str, Any]:
d: Dict[str, Any] = {"slots": self.slots}
if self.label.strip():
d["label"] = self.label.strip()
return d
@classmethod
def from_map_blob(cls, blob: Any) -> Optional[PatchPanel]:
if not isinstance(blob, dict):
return None
s = blob.get("slots")
if isinstance(s, str) and s.strip().isdigit():
s = int(s.strip())
if not isinstance(s, int) or s < 1:
return None
if s > _MAX_SLOTS:
return None
lab = blob.get("label")
label = lab.strip() if isinstance(lab, str) else ""
return cls(slots=s, label=label)
def effective_panel_slots(doc: Optional[Dict[str, Any]]) -> int:
"""Panel position count from ``doc['patch_panel']``, else ``PANEL_SLOTS``."""
if not isinstance(doc, dict):
return PANEL_SLOTS
pp = PatchPanel.from_map_blob(doc.get("patch_panel"))
if pp is not None:
return pp.slots
return PANEL_SLOTS
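``PatchPanel.from_map_blob`` accepts digit strings and rejects out-of-range counts so callers fall back to ``PANEL_SLOTS``. A standalone sketch of just that coercion (``parse_slots`` is a hypothetical helper, not part of fiwi):

```python
from typing import Any, Optional

_MAX_SLOTS = 256


def parse_slots(blob: Any) -> Optional[int]:
    """Mirror of the slot-count rules: dict only, '24' -> 24, reject < 1 or > 256."""
    if not isinstance(blob, dict):
        return None
    s = blob.get("slots")
    if isinstance(s, str) and s.strip().isdigit():
        s = int(s.strip())
    if not isinstance(s, int) or s < 1:
        return None
    if s > _MAX_SLOTS:
        return None
    return s


print(parse_slots({"slots": "24"}))  # → 24
print(parse_slots({"slots": 0}))     # → None
print(parse_slots("24"))             # → None (not a dict)
```

Returning ``None`` rather than raising keeps ``effective_panel_slots`` a simple "parsed value or default" lookup.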

fiwi/paths.py Normal file

@ -0,0 +1,22 @@
"""Runtime directory for JSON maps and Fi-Wi SSH dotenv files (e.g. remote_ssh.env)."""
import os
from typing import Optional
_BASE: Optional[str] = None
def configure(app_dir: str) -> None:
"""Call once from ``fiwi.py`` with ``dirname(abspath(__file__))``."""
global _BASE
_BASE = os.path.abspath(app_dir)
def base_dir() -> str:
if _BASE is None:
raise RuntimeError("fiwi.paths.configure() was not called (run via fiwi.py)")
return _BASE
def fiber_map_path() -> str:
return os.path.join(base_dir(), "fiber_map.json")
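``fiwi/paths.py`` relies on the entry point calling ``configure()`` exactly once; every helper fails loudly before that. A standalone sketch of the configure-once module pattern (names mirror the module, but this is a stand-in, not importable fiwi code):

```python
import os
from typing import Optional

_BASE: Optional[str] = None


def configure(app_dir: str) -> None:
    # Entry point calls this once with dirname(abspath(__file__)).
    global _BASE
    _BASE = os.path.abspath(app_dir)


def base_dir() -> str:
    # Loud failure beats silently resolving paths relative to the CWD.
    if _BASE is None:
        raise RuntimeError("configure() was not called")
    return _BASE


try:
    base_dir()
except RuntimeError as exc:
    print("unconfigured:", exc)

configure(".")
print(base_dir() == os.path.abspath("."))  # → True
```

The deliberate ``RuntimeError`` is why the docstring says "run via fiwi.py": importing ``fiwi.paths`` directly never guesses a directory.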


@ -3,9 +3,10 @@
 import json
 import sys
-from hubmgr.constants import PANEL_SLOTS
-from hubmgr import fiber_map_io as fm
-from hubmgr import remote_ssh as rs
+from fiwi.patch_panel import effective_panel_slots
+from fiwi import fiber_map_io as fm
+from fiwi.fiber_radio_port import FiberRadioPort
+from fiwi.ssh_node import SshNode, apply_fiwi_ssh_env
 
 def dispatch_fiber_mapped_ssh_if_needed(argv):
@ -13,7 +14,7 @@ def dispatch_fiber_mapped_ssh_if_needed(argv):
     If the fiber map says this port is on another host (ssh / host+user), forward over SSH
     without importing brainstem locally. Returns exit code, or None to continue normally.
     """
-    rs.apply_remote_ssh_env_file()
+    apply_fiwi_ssh_env()
     try:
         doc = fm.load_fiber_map_document()
     except (OSError, json.JSONDecodeError, ValueError):
@ -31,21 +32,27 @@ def dispatch_fiber_mapped_ssh_if_needed(argv):
             fid = int(argv[3])
         except ValueError:
             return None
-        entry = doc["fiber_ports"].get(str(fid))
-        ssh = fm.fiber_ssh_target(entry) if isinstance(entry, dict) else None
-        if ssh:
-            return rs.ssh_forward(ssh, ["power", "fiber-port", str(fid), argv[4].lower()])
+        node = FiberRadioPort.from_port_id(doc, fid).ssh_node()
+        if node:
+            return node.invoke(
+                ["power", "fiber-port", str(fid), argv[4].lower()],
+                defer=False,
+            )
     if len(argv) >= 3 and argv[0].lower() == "fiber" and argv[1].lower() == "chip":
         try:
             fid = int(argv[2])
         except ValueError:
             return None
-        entry = doc["fiber_ports"].get(str(fid))
-        ssh = fm.fiber_ssh_target(entry) if isinstance(entry, dict) else None
-        if ssh:
-            extra = ["save"] if len(argv) >= 4 and argv[3].lower() == "save" else []
-            return rs.ssh_forward(ssh, ["fiber", "chip", str(fid), *extra])
+        node = FiberRadioPort.from_port_id(doc, fid).ssh_node()
+        if node:
+            print(
+                "fiber chip: not forwarded over SSH — radios are PCIe/fiber; use panel calibrate PCIe prompts "
+                "or edit fiber_map.json (pcie / sfp_serial). Run fiber chip only for ports mapped locally on this machine.",
+                file=sys.stderr,
+                flush=True,
+            )
+            return 2
     if len(argv) >= 3 and argv[0].lower() == "panel":
         sub = argv[1].lower()
@ -54,11 +61,11 @@ def dispatch_fiber_mapped_ssh_if_needed(argv):
             pn = int(argv[2])
         except ValueError:
             return None
-        if pn < 1 or pn > PANEL_SLOTS:
+        n_slots = effective_panel_slots(doc)
+        if pn < 1 or pn > n_slots:
             return None
-        entry = doc["fiber_ports"].get(str(pn))
-        ssh = fm.fiber_ssh_target(entry) if isinstance(entry, dict) else None
-        if ssh:
-            return rs.ssh_forward(ssh, ["panel", sub, str(pn)])
+        node = FiberRadioPort.from_port_id(doc, pn).ssh_node()
+        if node:
+            return node.invoke(["panel", sub, str(pn)], defer=False)
     return None
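The ``fiwi/ssh_node.py`` handles that follow spawn their ``ssh`` child in the handle constructor and only wait in ``result()``; the overlap mechanism is plain OS processes, no threads. A sketch of the pattern with sleeping Python children standing in for ``ssh`` (``Handle`` is illustrative, not the fiwi API):

```python
import subprocess
import sys
import time


class Handle:
    def __init__(self, seconds: float) -> None:
        # Child starts here, in __init__, mirroring RemoteCallHandle's Popen.
        self._proc = subprocess.Popen(
            [sys.executable, "-c", f"import time; time.sleep({seconds})"]
        )

    def result(self) -> int:
        # Waiting is deferred until the caller actually needs the exit code.
        return self._proc.wait()


start = time.monotonic()
handles = [Handle(0.5), Handle(0.5)]   # both children already running
codes = [h.result() for h in handles]  # the waits overlap, not serialize
elapsed = time.monotonic() - start
print(codes, round(elapsed, 1))        # well under the 1.0 s serial total
```

Starting several handles before calling any ``result()`` is exactly how calibrate overlaps SSH sessions across remotes.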

fiwi/ssh_node.py Normal file

@ -0,0 +1,998 @@
"""
SSH transport for the Fi-Wi CLI on a remote host (``user@host``).
* :class:`SshNodeConfig` resolves ``FIWI_*`` values and the optional ``remote_ssh.env`` /
``.fiwi_remote`` next to the install (call :meth:`SshNodeConfig.load` before runs).
* :class:`SshNode` sync calls either **run to completion** (default) or return a **deferred
handle** when ``defer=True`` / ``FIWI_REMOTE_DEFER`` / CLI ``--async``. Handles wrap a spawned
``ssh`` child process (no Python threads; overlap comes from OS processes). Call
:meth:`RemoteCallHandle.result` (or the handle's ``result``) to wait. Async methods mirror this
with :class:`asyncio.Task` when deferred.
Parsing helpers for Fi-Wi CLI tables live at module level where the harness needs them.
"""
from __future__ import annotations
import asyncio
import json
import os
import shlex
import subprocess
import sys
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple, Union
from fiwi.paths import base_dir
_POPEN_TEXT_KW = {"text": True, "encoding": "utf-8", "errors": "replace"}
# Keys read from remote_ssh.env / .fiwi_remote (setdefault; real environment wins).
_SSH_ENV_FILE_KEYS = frozenset(
{
"FIWI_REMOTE_PYTHON",
"FIWI_REMOTE_SCRIPT",
"FIWI_SSH_BIN",
"FIWI_SSH_OPTS",
"FIWI_CALIBRATE_REMOTES",
"FIWI_REMOTE_DEFER",
}
)
async def _async_subprocess_communicate(
argv: List[str],
*,
timeout: float,
on_timeout: Tuple[int, str, str],
on_spawn_error: Tuple[int, str, str] = (1, "", ""),
catch_communicate_oserror: bool = False,
) -> Tuple[int, str, str]:
"""
Run ``argv`` with stdin closed and pipes; read full stdout/stderr via :meth:`asyncio.subprocess.Process.communicate`.
Use this when the caller only needs the final buffers (JSON, tables, short text). For **realtime** handling
(line parsing, progress, backpressure), use :meth:`asyncio.loop.subprocess_exec` with a custom
:class:`asyncio.SubprocessProtocol` and buffer or split lines inside :meth:`~asyncio.SubprocessProtocol.pipe_data_received`.
"""
try:
proc = await asyncio.create_subprocess_exec(
*argv,
stdin=asyncio.subprocess.DEVNULL,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
except OSError:
return on_spawn_error
try:
out_b, err_b = await asyncio.wait_for(proc.communicate(), timeout=timeout)
except asyncio.TimeoutError:
proc.kill()
try:
await proc.wait()
except OSError:
pass
return on_timeout
except OSError:
if not catch_communicate_oserror:
raise
proc.kill()
try:
await proc.wait()
except OSError:
pass
return on_timeout
code = proc.returncode if proc.returncode is not None else 1
return (
code,
(out_b or b"").decode(errors="replace"),
(err_b or b"").decode(errors="replace"),
)
class RemoteCallHandle:
"""
Deferred Fi-Wi **capture** call: ``ssh`` is already running as a child process.
Start several handles to overlap SSH sessions; :meth:`result` waits on this process only
(no Python threads).
"""
__slots__ = ("_default_timeout", "_finished", "_proc", "_triple")
def __init__(self, cmd: List[str], *, timeout: float) -> None:
self._default_timeout = timeout
self._finished = False
self._triple: Optional[Tuple[int, str, str]] = None
self._proc = subprocess.Popen(
cmd,
stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
**_POPEN_TEXT_KW,
)
def result(self, timeout: Optional[float] = None) -> Tuple[int, str, str]:
"""``communicate`` with the child; return ``(exit_code, stdout, stderr)``."""
if self._finished:
assert self._triple is not None
return self._triple
t = self._default_timeout if timeout is None else timeout
try:
out, err = self._proc.communicate(timeout=t)
except subprocess.TimeoutExpired:
self._proc.kill()
try:
self._proc.communicate(timeout=10)
except (subprocess.TimeoutExpired, OSError):
pass
self._finished = True
self._triple = (124, "", "ssh/fiwi timed out")
return self._triple
code = self._proc.returncode if self._proc.returncode is not None else 1
self._finished = True
self._triple = (code, out or "", err or "")
return self._triple
class RemoteInvokeHandle:
"""Deferred interactive Fi-Wi run: child ``ssh`` uses this process's stdin."""
__slots__ = ("_proc",)
def __init__(self, cmd: List[str], *, log_line: str) -> None:
print(log_line, file=sys.stderr, flush=True)
self._proc = subprocess.Popen(cmd, stdin=sys.stdin)
def result(self, timeout: Optional[float] = None) -> int:
try:
code = self._proc.wait(timeout=timeout)
except subprocess.TimeoutExpired:
self._proc.kill()
self._proc.wait()
return 124
return code if code is not None else 1
class FetchCalibratePortsHandle:
"""Deferred :meth:`SshNode.fetch_calibrate_ports_json` — first SSH starts in ``__init__``."""
__slots__ = ("_finished", "_node", "_p1", "_value")
def __init__(self, node: "SshNode") -> None:
self._node = node
self._finished = False
self._value: Optional[List[Tuple[int, int]]] = None
cfg = SshNodeConfig.load()
cmd = node._fiwi_cmd_argv(cfg, ["calibrate-ports-json"])
self._p1 = subprocess.Popen(
cmd,
stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
**_POPEN_TEXT_KW,
)
def result(self, timeout: Optional[float] = None) -> List[Tuple[int, int]]:
if self._finished:
assert self._value is not None
return self._value
t1 = 90.0 if timeout is None else timeout
try:
out, err = self._p1.communicate(timeout=t1)
except subprocess.TimeoutExpired:
self._p1.kill()
try:
self._p1.communicate(timeout=10)
except (subprocess.TimeoutExpired, OSError):
pass
self._finished = True
self._value = []
return []
code = self._p1.returncode if self._p1.returncode is not None else 1
out = out or ""
err = err or ""
json_pairs = None
if code == 0 and out.strip():
json_pairs = _pairs_from_calibrate_json_stdout(out)
if json_pairs is not None:
self._finished = True
self._value = json_pairs
return json_pairs
if "Unknown command" in (out + err):
print(
f"Remote {self._node.target!r} has no calibrate-ports-json; using discover fallback "
"(update fiwi on the remote host to skip the extra SSH round trip).",
file=sys.stderr,
flush=True,
)
cfg = SshNodeConfig.load()
cmd2 = self._node._fiwi_cmd_argv(cfg, ["discover"])
try:
p2 = subprocess.run(
cmd2,
capture_output=True,
timeout=120,
stdin=subprocess.DEVNULL,
**_POPEN_TEXT_KW,
)
except subprocess.TimeoutExpired:
self._finished = True
self._value = []
return []
except OSError:
self._finished = True
self._value = []
return []
code2 = p2.returncode if p2.returncode is not None else 1
out2 = (p2.stdout or "") if p2.stdout is not None else ""
err2 = (p2.stderr or "") if p2.stderr is not None else ""
if code2 != 0:
print(
f"discover on {self._node.target!r} failed (exit {code2}): {(err2 or out2).strip()[:400]}",
file=sys.stderr,
flush=True,
)
_maybe_hint_remote_python(code2, out2, err2)
if code != 0 or (out + err).strip():
print(
f"calibrate-ports-json on {self._node.target!r} (exit {code}): {(err or out).strip()[:400]}",
file=sys.stderr,
flush=True,
)
_maybe_hint_remote_python(code, out, err)
self._finished = True
self._value = []
return []
pairs = _parse_discover_stdout_for_calibrate_ports(out2)
if not pairs:
print(
f"Could not parse hub/port list from discover on {self._node.target!r}. "
"Ensure the host sees its hubs (udev 24ff) and discover prints the Hub|Serial|Ports table.",
file=sys.stderr,
flush=True,
)
self._finished = True
self._value = []
return []
print(
f"Using discover output for {self._node.target!r} ({len(pairs)} hub.port step(s)).",
file=sys.stderr,
flush=True,
)
self._finished = True
self._value = pairs
return pairs
class _HubPowerHandle:
__slots__ = ("_inner",)
def __init__(self, node: "SshNode", hub_1: int, port_0: int, enable: bool) -> None:
sub = "on" if enable else "off"
cfg = SshNodeConfig.load()
cmd = node._fiwi_cmd_argv(cfg, [sub, f"{hub_1}.{port_0}"])
self._inner = RemoteCallHandle(cmd, timeout=90.0)
def result(self, timeout: Optional[float] = None) -> Tuple[int, str]:
code, out, err = self._inner.result(timeout=timeout)
blob = "\n".join(x.strip() for x in (out or "", err or "") if x and x.strip()).strip()
return code, blob
class _StrFeedbackHandle:
__slots__ = ("_hub_1", "_inner", "_port_0")
def __init__(self, node: "SshNode", hub_1: int, port_0: int) -> None:
cfg = SshNodeConfig.load()
cmd = node._fiwi_cmd_argv(cfg, ["status", f"{hub_1}.{port_0}"])
self._inner = RemoteCallHandle(cmd, timeout=90.0)
self._hub_1 = hub_1
self._port_0 = port_0
def result(self, timeout: Optional[float] = None) -> str:
code, out, err = self._inner.result(timeout=timeout)
if code != 0:
return f"remote status failed ({code}): {(err or out).strip()[:120]}"
pwr, cur = parse_status_line_for_hub_port(out, self._hub_1, self._port_0)
return f"remote hub reports {pwr}, {cur} mA"
class _WlanJsonHandle:
__slots__ = ("_inner",)
def __init__(self, node: "SshNode", timeout: float) -> None:
cfg = SshNodeConfig.load()
cmd = node._fiwi_cmd_argv(cfg, ["wlan-info-json"])
self._inner = RemoteCallHandle(cmd, timeout=timeout)
def result(self, timeout: Optional[float] = None) -> Optional[Dict[str, Any]]:
code, out, err = self._inner.result(timeout=timeout)
if code != 0 or not (out or "").strip():
return None
try:
data = json.loads(out.strip())
except json.JSONDecodeError:
return None
return data if isinstance(data, dict) else None
class _LsusbLinesHandle:
"""First ``lsusb-lines-json`` SSH may still be running; optional second raw ``lsusb``."""
__slots__ = ("_inner", "_node")
def __init__(self, node: "SshNode") -> None:
self._node = node
cfg = SshNodeConfig.load()
cmd = node._fiwi_cmd_argv(cfg, ["lsusb-lines-json"])
self._inner = RemoteCallHandle(cmd, timeout=45.0)
def result(self, timeout: Optional[float] = None) -> List[str]:
code, out, err = self._inner.result(timeout=timeout)
if code == 0 and out.strip():
try:
data = json.loads(out.strip())
if isinstance(data, list) and all(isinstance(x, str) for x in data):
return data
except json.JSONDecodeError:
pass
cfg = SshNodeConfig.load()
cmd = [cfg.ssh_bin, *cfg.ssh_extra_argv, self._node.target, "lsusb"]
try:
p2 = subprocess.run(
cmd,
capture_output=True,
timeout=45,
stdin=subprocess.DEVNULL,
**_POPEN_TEXT_KW,
)
except (OSError, subprocess.TimeoutExpired):
return []
if p2.returncode != 0 or not p2.stdout:
return []
return p2.stdout.splitlines()
def resolve_remote_defer(defer: Optional[bool]) -> bool:
"""
Effective deferred / concurrent flag.
``defer`` explicit ``True``/``False`` wins; otherwise ``FIWI_REMOTE_DEFER`` from the
environment (including after :func:`apply_fiwi_ssh_env`) set by CLI ``--async`` or
``remote_ssh.env``.
"""
if defer is not None:
return defer
apply_fiwi_ssh_env()
v = os.environ.get("FIWI_REMOTE_DEFER", "").strip().lower()
return v in ("1", "true", "yes")
def apply_fiwi_ssh_env() -> None:
"""
Load ``remote_ssh.env`` or ``.fiwi_remote`` from the install directory into
``os.environ`` (``FIWI_*`` keys used for SSH transport only).
"""
b = base_dir()
for fname in ("remote_ssh.env", ".fiwi_remote"):
path = os.path.join(b, fname)
if not os.path.isfile(path):
continue
try:
with open(path, encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line or line.startswith("#"):
continue
if "=" not in line:
continue
key, _, val = line.partition("=")
key, val = key.strip(), val.strip()
if len(val) >= 2 and val[0] == val[-1] and val[0] in "'\"":
val = val[1:-1]
if key in _SSH_ENV_FILE_KEYS:
os.environ.setdefault(key, val)
except OSError:
continue
break
@dataclass(frozen=True)
class SshNodeConfig:
"""SSH client argv + remote ``python`` / ``fiwi.py`` paths after env + dotfiles."""
python: str
script: str
ssh_bin: str
ssh_extra_argv: tuple[str, ...]
calibrate_remotes: str
@classmethod
def load(cls) -> SshNodeConfig:
apply_fiwi_ssh_env()
raw_opts = os.environ.get("FIWI_SSH_OPTS") or ""
return cls(
python=os.environ.get("FIWI_REMOTE_PYTHON") or "python3",
script=os.environ.get("FIWI_REMOTE_SCRIPT") or "/usr/local/bin/fiwi.py",
ssh_bin=os.environ.get("FIWI_SSH_BIN") or "ssh",
ssh_extra_argv=tuple(shlex.split(raw_opts)),
calibrate_remotes=(os.environ.get("FIWI_CALIBRATE_REMOTES") or "").strip(),
)
@dataclass(frozen=True)
class SshNode:
"""One SSH destination for Fi-Wi (BrainStem hubs, calibration, etc.)."""
target: str
def __post_init__(self) -> None:
t = (self.target or "").strip()
if not t or "@" not in t:
raise ValueError("SshNode.target must be non-empty user@host")
object.__setattr__(self, "target", t)
@classmethod
def parse(cls, user_host: str) -> SshNode:
return cls(target=(user_host or "").strip())
def _invoke_blocking(self, remote_args: List[str]) -> int:
cfg = SshNodeConfig.load()
cmd = self._fiwi_cmd_argv_tty(cfg, remote_args)
print(
f"fiwi: ssh {self.target} {cfg.python} {cfg.script} {' '.join(remote_args)}",
file=sys.stderr,
flush=True,
)
proc = subprocess.run(cmd, stdin=sys.stdin)
return proc.returncode if proc.returncode is not None else 1
def invoke(
self, remote_args: List[str], *, defer: Optional[bool] = None
) -> Union[int, RemoteInvokeHandle]:
"""
TTY-capable Fi-Wi CLI (stdin forwarded).
Adds ``ssh -t`` for ``panel calibrate`` when ``FIWI_SSH_OPTS`` has no ``-t``.
When deferred, returns :class:`RemoteInvokeHandle`; child ``ssh`` already runs and shares
this process's stdin until :meth:`~RemoteInvokeHandle.result`.
"""
if resolve_remote_defer(defer):
cfg = SshNodeConfig.load()
cmd = self._fiwi_cmd_argv_tty(cfg, remote_args)
log = (
f"fiwi: ssh {self.target} {cfg.python} {cfg.script} {' '.join(remote_args)}"
)
return RemoteInvokeHandle(cmd, log_line=log)
return self._invoke_blocking(remote_args)
def _invoke_capture_blocking(
self, remote_args: List[str], timeout: float = 90
) -> Tuple[int, str, str]:
cfg = SshNodeConfig.load()
cmd = self._fiwi_cmd_argv(cfg, remote_args)
try:
proc = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=timeout,
stdin=subprocess.DEVNULL,
)
except subprocess.TimeoutExpired:
return 124, "", "ssh/fiwi timed out"
return (
proc.returncode if proc.returncode is not None else 1,
proc.stdout or "",
proc.stderr or "",
)
def invoke_capture(
self,
remote_args: List[str],
*,
timeout: float = 90,
defer: Optional[bool] = None,
) -> Union[Tuple[int, str, str], RemoteCallHandle]:
"""Fi-Wi CLI with captured stdout/stderr. No TTY."""
if resolve_remote_defer(defer):
cfg = SshNodeConfig.load()
cmd = self._fiwi_cmd_argv(cfg, remote_args)
return RemoteCallHandle(cmd, timeout=timeout)
return self._invoke_capture_blocking(remote_args, timeout=timeout)
def _fiwi_cmd_argv(self, cfg: SshNodeConfig, remote_args: List[str]) -> List[str]:
return [
cfg.ssh_bin,
*cfg.ssh_extra_argv,
self.target,
cfg.python,
cfg.script,
*remote_args,
]
def _fiwi_cmd_argv_tty(self, cfg: SshNodeConfig, remote_args: List[str]) -> List[str]:
extra = list(cfg.ssh_extra_argv)
if (
len(remote_args) >= 2
and remote_args[0].lower() == "panel"
and remote_args[1].lower() == "calibrate"
and not any(x in ("-t", "-tt") for x in extra)
):
extra = ["-t", *extra]
return [cfg.ssh_bin, *extra, self.target, cfg.python, cfg.script, *remote_args]
async def _ainvoke_coro(self, remote_args: List[str]) -> int:
cfg = SshNodeConfig.load()
cmd = self._fiwi_cmd_argv_tty(cfg, remote_args)
print(
f"fiwi: ssh {self.target} {cfg.python} {cfg.script} {' '.join(remote_args)}",
file=sys.stderr,
flush=True,
)
try:
proc = await asyncio.create_subprocess_exec(*cmd)
except OSError:
return 1
return await proc.wait()
async def ainvoke(
self, remote_args: List[str], *, defer: Optional[bool] = None
) -> Union[int, asyncio.Task[int]]:
"""
Async interactive Fi-Wi CLI (inherits stdin; no ``asyncio.to_thread``).
When deferred, returns an :class:`asyncio.Task`.
"""
if resolve_remote_defer(defer):
return asyncio.create_task(self._ainvoke_coro(remote_args))
return await self._ainvoke_coro(remote_args)
async def _ainvoke_capture_coro(
self, remote_args: List[str], timeout: float = 90
) -> Tuple[int, str, str]:
cfg = SshNodeConfig.load()
cmd = self._fiwi_cmd_argv(cfg, remote_args)
return await _async_subprocess_communicate(
cmd,
timeout=timeout,
on_timeout=(124, "", "ssh/fiwi timed out"),
on_spawn_error=(1, "", ""),
)
async def ainvoke_capture(
self,
remote_args: List[str],
*,
timeout: float = 90,
defer: Optional[bool] = None,
) -> Union[Tuple[int, str, str], asyncio.Task[Tuple[int, str, str]]]:
"""Fi-Wi CLI with captured stdout/stderr; concurrent-safe when deferred (returns Task)."""
if resolve_remote_defer(defer):
return asyncio.create_task(
self._ainvoke_capture_coro(remote_args, timeout=timeout)
)
return await self._ainvoke_capture_coro(remote_args, timeout=timeout)
async def _araw_ssh_coro(
self, remote_argv: List[str], *, timeout: float = 45
) -> Tuple[int, str, str]:
cfg = SshNodeConfig.load()
cmd = [cfg.ssh_bin, *cfg.ssh_extra_argv, self.target, *remote_argv]
return await _async_subprocess_communicate(
cmd,
timeout=timeout,
on_timeout=(1, "", ""),
on_spawn_error=(1, "", ""),
catch_communicate_oserror=True,
)
async def araw_ssh(
self,
remote_argv: List[str],
*,
timeout: float = 45,
defer: Optional[bool] = None,
) -> Union[Tuple[int, str, str], asyncio.Task[Tuple[int, str, str]]]:
"""Async raw remote command (no ``fiwi.py`` wrapper)."""
if resolve_remote_defer(defer):
return asyncio.create_task(self._araw_ssh_coro(remote_argv, timeout=timeout))
return await self._araw_ssh_coro(remote_argv, timeout=timeout)
def _raw_ssh_blocking(
self, remote_argv: List[str], *, timeout: float = 45
) -> Tuple[int, str, str]:
cfg = SshNodeConfig.load()
cmd = [cfg.ssh_bin, *cfg.ssh_extra_argv, self.target, *remote_argv]
try:
proc = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=timeout,
stdin=subprocess.DEVNULL,
)
except (OSError, subprocess.TimeoutExpired):
return 1, "", ""
return (
proc.returncode if proc.returncode is not None else 1,
proc.stdout or "",
proc.stderr or "",
)
def raw_ssh(
self,
remote_argv: List[str],
*,
timeout: float = 45,
defer: Optional[bool] = None,
) -> Union[Tuple[int, str, str], RemoteCallHandle]:
"""Run ``remote_argv`` on the host (no ``python fiwi.py`` prefix)."""
if resolve_remote_defer(defer):
cfg = SshNodeConfig.load()
cmd = [cfg.ssh_bin, *cfg.ssh_extra_argv, self.target, *remote_argv]
return RemoteCallHandle(cmd, timeout=timeout)
return self._raw_ssh_blocking(remote_argv, timeout=timeout)
def _fetch_calibrate_ports_json_blocking(self) -> List[Tuple[int, int]]:
"""``[[hub, port], …]`` from remote ``calibrate-ports-json``, else parse ``discover``."""
code, out, err = self._invoke_capture_blocking(["calibrate-ports-json"], timeout=90)
out = out or ""
err = err or ""
json_pairs = None
if code == 0 and out.strip():
json_pairs = _pairs_from_calibrate_json_stdout(out)
if json_pairs is not None:
return json_pairs
if "Unknown command" in (out + err):
print(
f"Remote {self.target!r} has no calibrate-ports-json; using discover fallback "
"(update fiwi on the remote host to skip the extra SSH round trip).",
file=sys.stderr,
flush=True,
)
code2, out2, err2 = self._invoke_capture_blocking(["discover"], timeout=120)
out2 = out2 or ""
if code2 != 0:
print(
f"discover on {self.target!r} failed (exit {code2}): {(err2 or out2).strip()[:400]}",
file=sys.stderr,
flush=True,
)
_maybe_hint_remote_python(code2, out2, err2)
if code != 0 or (out + err).strip():
print(
f"calibrate-ports-json on {self.target!r} (exit {code}): {(err or out).strip()[:400]}",
file=sys.stderr,
flush=True,
)
_maybe_hint_remote_python(code, out, err)
return []
pairs = _parse_discover_stdout_for_calibrate_ports(out2)
if not pairs:
print(
f"Could not parse hub/port list from discover on {self.target!r}. "
"Ensure the host sees its hubs (udev 24ff) and discover prints the Hub|Serial|Ports table.",
file=sys.stderr,
flush=True,
)
return []
print(
f"Using discover output for {self.target!r} ({len(pairs)} hub.port step(s)).",
file=sys.stderr,
flush=True,
)
return pairs
def fetch_calibrate_ports_json(
self, *, defer: Optional[bool] = None
) -> Union[List[Tuple[int, int]], FetchCalibratePortsHandle]:
if resolve_remote_defer(defer):
return FetchCalibratePortsHandle(self)
return self._fetch_calibrate_ports_json_blocking()
async def _afetch_calibrate_ports_json_coro(self) -> List[Tuple[int, int]]:
code, out, err = await self._ainvoke_capture_coro(["calibrate-ports-json"], timeout=90)
out = out or ""
err = err or ""
json_pairs = None
if code == 0 and out.strip():
json_pairs = _pairs_from_calibrate_json_stdout(out)
if json_pairs is not None:
return json_pairs
if "Unknown command" in (out + err):
print(
f"Remote {self.target!r} has no calibrate-ports-json; using discover fallback "
"(update fiwi on the remote host to skip the extra SSH round trip).",
file=sys.stderr,
flush=True,
)
code2, out2, err2 = await self._ainvoke_capture_coro(["discover"], timeout=120)
out2 = out2 or ""
if code2 != 0:
print(
f"discover on {self.target!r} failed (exit {code2}): {(err2 or out2).strip()[:400]}",
file=sys.stderr,
flush=True,
)
_maybe_hint_remote_python(code2, out2, err2)
if code != 0 or (out + err).strip():
print(
f"calibrate-ports-json on {self.target!r} (exit {code}): {(err or out).strip()[:400]}",
file=sys.stderr,
flush=True,
)
_maybe_hint_remote_python(code, out, err)
return []
pairs = _parse_discover_stdout_for_calibrate_ports(out2)
if not pairs:
print(
f"Could not parse hub/port list from discover on {self.target!r}. "
"Ensure the host sees its hubs (udev 24ff) and discover prints the Hub|Serial|Ports table.",
file=sys.stderr,
flush=True,
)
return []
print(
f"Using discover output for {self.target!r} ({len(pairs)} hub.port step(s)).",
file=sys.stderr,
flush=True,
)
return pairs
async def afetch_calibrate_ports_json(
self, *, defer: Optional[bool] = None
) -> Union[List[Tuple[int, int]], asyncio.Task[List[Tuple[int, int]]]]:
"""Async :meth:`fetch_calibrate_ports_json`; use ``asyncio.gather`` on tasks when deferred."""
if resolve_remote_defer(defer):
return asyncio.create_task(self._afetch_calibrate_ports_json_coro())
return await self._afetch_calibrate_ports_json_coro()
def _remote_hub_port_power_blocking(
self, hub_1: int, port_0: int, enable: bool
) -> Tuple[int, str]:
sub = "on" if enable else "off"
code, out, err = self._invoke_capture_blocking([sub, f"{hub_1}.{port_0}"])
blob = "\n".join(x.strip() for x in (out or "", err or "") if x and x.strip()).strip()
return code, blob
def remote_hub_port_power(
self,
hub_1: int,
port_0: int,
enable: bool,
*,
defer: Optional[bool] = None,
) -> Union[Tuple[int, str], _HubPowerHandle]:
if resolve_remote_defer(defer):
return _HubPowerHandle(self, hub_1, port_0, enable)
return self._remote_hub_port_power_blocking(hub_1, port_0, enable)
async def _aremote_hub_port_power_coro(
self, hub_1: int, port_0: int, enable: bool
) -> Tuple[int, str]:
sub = "on" if enable else "off"
code, out, err = await self._ainvoke_capture_coro([sub, f"{hub_1}.{port_0}"])
blob = "\n".join(x.strip() for x in (out or "", err or "") if x and x.strip()).strip()
return code, blob
async def aremote_hub_port_power(
self,
hub_1: int,
port_0: int,
enable: bool,
*,
defer: Optional[bool] = None,
) -> Union[Tuple[int, str], asyncio.Task[Tuple[int, str]]]:
if resolve_remote_defer(defer):
return asyncio.create_task(
self._aremote_hub_port_power_coro(hub_1, port_0, enable)
)
return await self._aremote_hub_port_power_coro(hub_1, port_0, enable)
def _remote_port_power_feedback_blocking(self, hub_1: int, port_0: int) -> str:
code, out, err = self._invoke_capture_blocking(["status", f"{hub_1}.{port_0}"])
if code != 0:
return f"remote status failed ({code}): {(err or out).strip()[:120]}"
pwr, cur = parse_status_line_for_hub_port(out, hub_1, port_0)
return f"remote hub reports {pwr}, {cur} mA"
def remote_port_power_feedback(
self, hub_1: int, port_0: int, *, defer: Optional[bool] = None
) -> Union[str, _StrFeedbackHandle]:
if resolve_remote_defer(defer):
return _StrFeedbackHandle(self, hub_1, port_0)
return self._remote_port_power_feedback_blocking(hub_1, port_0)
async def _aremote_port_power_feedback_coro(self, hub_1: int, port_0: int) -> str:
code, out, err = await self._ainvoke_capture_coro(["status", f"{hub_1}.{port_0}"])
if code != 0:
return f"remote status failed ({code}): {(err or out).strip()[:120]}"
pwr, cur = parse_status_line_for_hub_port(out, hub_1, port_0)
return f"remote hub reports {pwr}, {cur} mA"
async def aremote_port_power_feedback(
self, hub_1: int, port_0: int, *, defer: Optional[bool] = None
) -> Union[str, asyncio.Task[str]]:
if resolve_remote_defer(defer):
return asyncio.create_task(
self._aremote_port_power_feedback_coro(hub_1, port_0)
)
return await self._aremote_port_power_feedback_coro(hub_1, port_0)
def _remote_wlan_info_json_blocking(self, timeout: float = 60) -> Optional[Dict[str, Any]]:
code, out, err = self._invoke_capture_blocking(
["wlan-info-json"], timeout=timeout
)
if code != 0 or not (out or "").strip():
return None
try:
data = json.loads(out.strip())
except json.JSONDecodeError:
return None
return data if isinstance(data, dict) else None
def remote_wlan_info_json(
self, timeout: float = 60, *, defer: Optional[bool] = None
) -> Union[Optional[Dict[str, Any]], _WlanJsonHandle]:
if resolve_remote_defer(defer):
return _WlanJsonHandle(self, timeout)
return self._remote_wlan_info_json_blocking(timeout=timeout)
async def _aremote_wlan_info_json_coro(self, timeout: float = 60) -> Optional[Dict[str, Any]]:
code, out, err = await self._ainvoke_capture_coro(
["wlan-info-json"], timeout=timeout
)
if code != 0 or not (out or "").strip():
return None
try:
data = json.loads(out.strip())
except json.JSONDecodeError:
return None
return data if isinstance(data, dict) else None
async def aremote_wlan_info_json(
self, timeout: float = 60, *, defer: Optional[bool] = None
) -> Union[Optional[Dict[str, Any]], asyncio.Task[Optional[Dict[str, Any]]]]:
if resolve_remote_defer(defer):
return asyncio.create_task(self._aremote_wlan_info_json_coro(timeout=timeout))
return await self._aremote_wlan_info_json_coro(timeout=timeout)
def _remote_lsusb_lines_blocking(self) -> List[str]:
code, out, err = self._invoke_capture_blocking(["lsusb-lines-json"], timeout=45)
if code == 0 and out.strip():
try:
data = json.loads(out.strip())
if isinstance(data, list) and all(isinstance(x, str) for x in data):
return data
except json.JSONDecodeError:
pass
code2, out2, err2 = self._raw_ssh_blocking(["lsusb"], timeout=45)
if code2 != 0 or not out2:
return []
return out2.splitlines()
def remote_lsusb_lines(
self, *, defer: Optional[bool] = None
) -> Union[List[str], _LsusbLinesHandle]:
if resolve_remote_defer(defer):
return _LsusbLinesHandle(self)
return self._remote_lsusb_lines_blocking()
async def _aremote_lsusb_lines_coro(self) -> List[str]:
code, out, err = await self._ainvoke_capture_coro(["lsusb-lines-json"], timeout=45)
if code == 0 and out.strip():
try:
data = json.loads(out.strip())
if isinstance(data, list) and all(isinstance(x, str) for x in data):
return data
except json.JSONDecodeError:
pass
code2, out2, err2 = await self._araw_ssh_coro(["lsusb"], timeout=45)
if code2 != 0 or not out2:
return []
return out2.splitlines()
async def aremote_lsusb_lines(
self, *, defer: Optional[bool] = None
) -> Union[List[str], asyncio.Task[List[str]]]:
if resolve_remote_defer(defer):
return asyncio.create_task(self._aremote_lsusb_lines_coro())
return await self._aremote_lsusb_lines_coro()
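Every async method above follows the same deferred-call shape: when `defer` resolves true, the method returns an `asyncio.Task` immediately instead of awaiting, so several SSH round trips can overlap under `asyncio.gather`. A minimal self-contained sketch of that shape (the `Node`/`aprobe` names are illustrative, not the real `SshNode` API):

```python
import asyncio
from typing import Optional, Union

def resolve_defer(defer: Optional[bool]) -> bool:
    # Stand-in for resolve_remote_defer(); the real helper also consults FIWI_REMOTE_DEFER.
    return bool(defer)

class Node:
    async def _probe(self, name: str) -> str:
        await asyncio.sleep(0.01)  # pretend this is an SSH round trip
        return f"{name}: ok"

    async def aprobe(
        self, name: str, *, defer: Optional[bool] = None
    ) -> Union[str, "asyncio.Task[str]"]:
        if resolve_defer(defer):
            # Schedule now, await later: callers can overlap several probes.
            return asyncio.create_task(self._probe(name))
        return await self._probe(name)

async def main() -> list:
    node = Node()
    t1 = await node.aprobe("wlan", defer=True)   # returns a Task
    t2 = await node.aprobe("lsusb", defer=True)  # returns a Task
    return list(await asyncio.gather(t1, t2))    # both round trips overlap

results = asyncio.run(main())
print(results)
```

The non-deferred path (`defer=None` with deferral disabled) awaits inline and hands back the plain result, so callers that do not care about overlap keep a simple call shape.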
def _pairs_from_calibrate_json_stdout(out: str) -> Optional[List[Tuple[int, int]]]:
try:
data = json.loads(out.strip())
except (json.JSONDecodeError, AttributeError):
return None
if not isinstance(data, list):
return None
pairs: List[Tuple[int, int]] = []
for item in data:
if isinstance(item, (list, tuple)) and len(item) == 2:
try:
pairs.append((int(item[0]), int(item[1])))
except (TypeError, ValueError):
continue
return pairs
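The helper above is deliberately tolerant: unparseable or non-list payloads return `None` (distinguishing "bad output" from "no ports"), while malformed items inside a valid list are skipped. A self-contained copy illustrating that contract:

```python
import json
from typing import List, Optional, Tuple

def pairs_from_json(out: str) -> Optional[List[Tuple[int, int]]]:
    # Mirrors _pairs_from_calibrate_json_stdout: None = unparseable, [] = no ports.
    try:
        data = json.loads(out.strip())
    except (json.JSONDecodeError, AttributeError):
        return None
    if not isinstance(data, list):
        return None
    pairs: List[Tuple[int, int]] = []
    for item in data:
        if isinstance(item, (list, tuple)) and len(item) == 2:
            try:
                pairs.append((int(item[0]), int(item[1])))
            except (TypeError, ValueError):
                continue  # skip malformed rows, keep the rest
    return pairs

print(pairs_from_json('[[1, 0], [1, "7"], ["x", 2], [3]]'))  # bad rows dropped
print(pairs_from_json('{"not": "a list"}'))                  # None
```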
def _maybe_hint_remote_python(exit_code: int, out: str, err: str) -> None:
"""Exit 126/127 often means bad ``FIWI_REMOTE_PYTHON`` on the remote host."""
if exit_code not in (126, 127):
return
blob = f"{out or ''} {err or ''}".lower()
if "no such file" not in blob and "not found" not in blob:
return
cfg = SshNodeConfig.load()
print(
" SSH ran this on the remote host (paths must exist *there*, not on your workstation):\n"
f" interpreter: {cfg.python}\n"
f" script: {cfg.script}\n"
" Fix: SSH in, activate the venv with brainstem, run:\n"
" which python3\n"
" realpath /path/to/fiwi.py\n"
" Put those absolute paths in remote_ssh.env next to fiwi.py, or export:\n"
" FIWI_REMOTE_PYTHON=… FIWI_REMOTE_SCRIPT=…",
file=sys.stderr,
flush=True,
)
def _parse_discover_stdout_for_calibrate_ports(stdout: str) -> List[Tuple[int, int]]:
"""Parse ``discover`` table lines → ``(hub, port0)`` pairs."""
pairs = []
for line in stdout.splitlines():
raw = line.rstrip()
if "|" not in raw:
continue
if set(raw.strip()) <= {"-", " "}:
continue
parts = [p.strip() for p in raw.split("|")]
if len(parts) < 3:
continue
hub_s, _, n_s = parts[0], parts[1], parts[2]
if not hub_s.isdigit():
continue
try:
hub = int(hub_s)
nports = int(n_s)
except ValueError:
continue
if hub < 1 or nports < 1:
continue
for p in range(nports):
pairs.append((hub, p))
return pairs
def parse_status_line_for_hub_port(stdout: str, hub_1: int, port_0: int) -> Tuple[str, str]:
"""Parse ``status hub.port`` table output for one port; return ``(power, current_str)``."""
for line in stdout.splitlines():
ln = line.strip()
if not ln or ln.startswith("-") or "Identity" in ln or "Hub" not in ln:
continue
parts = [p.strip() for p in line.split("|")]
if len(parts) < 4:
continue
hub_part = parts[0].replace("Hub", "", 1).strip()
try:
h = int(hub_part)
p = int(parts[1])
except ValueError:
continue
if h == hub_1 and p == port_0:
return parts[2], parts[3]
return "?", "?"
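For reference, the table shape this parser expects looks roughly like the sample below (the column layout is inferred from the parser itself, not from captured hub output):

```python
def parse_status(stdout: str, hub_1: int, port_0: int):
    # Same logic as parse_status_line_for_hub_port, inlined for the demo.
    for line in stdout.splitlines():
        ln = line.strip()
        if not ln or ln.startswith("-") or "Identity" in ln or "Hub" not in ln:
            continue
        parts = [p.strip() for p in line.split("|")]
        if len(parts) < 4:
            continue
        hub_part = parts[0].replace("Hub", "", 1).strip()
        try:
            h = int(hub_part)
            p = int(parts[1])
        except ValueError:
            continue  # header row: "Hub | Port | ..." has no numeric hub id
        if h == hub_1 and p == port_0:
            return parts[2], parts[3]
    return "?", "?"

sample = (
    "Hub  | Port | Power | Current\n"
    "-----------------------------\n"
    "Hub 1 |  3  |  On   |  412\n"
)
print(parse_status(sample, 1, 3))  # ('On', '412')
```

Note the `("?", "?")` fallback: a missing row degrades to placeholder feedback rather than an exception, which keeps calibrate loops running.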

View File

@ -1,3 +1,8 @@
"""Local USB listing (``lsusb``) — sync and :mod:`asyncio` variants."""
from __future__ import annotations
import asyncio
import shutil
import subprocess
@ -17,6 +22,30 @@ def lsusb_lines():
return proc.stdout.splitlines()
async def alsusb_lines() -> list[str]:
"""Async :func:`lsusb_lines` for use under ``asyncio.gather`` with SSH probes."""
lsusb_bin = shutil.which("lsusb")
if not lsusb_bin:
return []
proc = None
try:
proc = await asyncio.create_subprocess_exec(
lsusb_bin,
stdin=asyncio.subprocess.DEVNULL,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.DEVNULL,
)
out_b, _ = await asyncio.wait_for(proc.communicate(), timeout=15)
except (OSError, asyncio.TimeoutError):
if proc is not None and proc.returncode is None:
proc.kill()
await proc.wait()
return []
if proc is None or proc.returncode != 0 or not out_b:
return []
return out_b.decode(errors="replace").splitlines()
def lsusb_new_devices(before_lines, after_lines):
"""Lines present in after but not before, excluding Acroname hub vendor lines."""
before = set(before_lines)
@ -50,3 +79,8 @@ def lsusb_acroname_lines():
]
except (OSError, subprocess.TimeoutExpired):
return []
async def alsusb_acroname_lines() -> list[str]:
lines = await alsusb_lines()
return [ln for ln in lines if "24ff:" in ln.lower() or " acroname" in ln.lower()]
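`alsusb_lines` follows a common asyncio-subprocess skeleton: resolve the binary, pipe stdout only, bound the run with `wait_for`, and kill-and-reap on timeout so the child cannot outlive the caller. The same skeleton with a portable command (`echo` standing in for `lsusb`):

```python
import asyncio
import shutil

async def run_lines(argv, timeout: float = 15):
    exe = shutil.which(argv[0])
    if not exe:
        return []
    proc = None
    try:
        proc = await asyncio.create_subprocess_exec(
            exe, *argv[1:],
            stdin=asyncio.subprocess.DEVNULL,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.DEVNULL,
        )
        out_b, _ = await asyncio.wait_for(proc.communicate(), timeout=timeout)
    except (OSError, asyncio.TimeoutError):
        if proc is not None and proc.returncode is None:
            proc.kill()       # reap the child so it cannot linger past the timeout
            await proc.wait()
        return []
    if proc.returncode != 0 or not out_b:
        return []
    return out_b.decode(errors="replace").splitlines()

lines = asyncio.run(run_lines(["echo", "Bus 001 Device 002"]))
print(lines)
```

Because this coroutine never blocks the event loop, it can sit inside the same `asyncio.gather` as the SSH probes above.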

12
fiwi_env.sh Executable file
View File

@ -0,0 +1,12 @@
#!/usr/bin/env bash
# Optional dev helpers — from repo root: source ./fiwi_env.sh
FIWI_REPO="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"
export FIWI_CLI="$FIWI_REPO/fiwi.py"
alias fiwi-discover='$FIWI_CLI discover'
alias fiwi-status='$FIWI_CLI status all'
alias fiwi-on='$FIWI_CLI on'
alias fiwi-off='$FIWI_CLI off'
alias fiwi-verify='$FIWI_CLI verify'
alias fiwi-setup='$FIWI_CLI setup'
alias fiwi-reboot='$FIWI_CLI reboot'
alias fiwi-reboot-force='$FIWI_CLI reboot-force'
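The `FIWI_REPO` line uses the standard sourced-script idiom: `${BASH_SOURCE[0]}` names the file being sourced (falling back to `$0` for direct execution), and the `cd … && pwd` pair canonicalizes it to an absolute directory, so the aliases work from any working directory. The idiom in isolation:

```shell
#!/usr/bin/env bash
# Resolve the absolute directory of this script, whether sourced or executed.
# ${BASH_SOURCE[0]:-$0} is the script's own path; cd + pwd canonicalizes it.
here="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"
echo "$here"
```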

View File

@ -1 +0,0 @@
"""Hub manager: Acroname USB hubs, fiber_map.json routing, optional SSH to remote hosts."""

View File

@ -1,208 +0,0 @@
"""Command-line entry: argv dispatch, --ssh, fiber-map SSH forwarding."""
import json
import os
import sys
from hubmgr.acroname import AcronameManager
from hubmgr.brainstem_loader import load_brainstem
from hubmgr.ssh_dispatch import dispatch_fiber_mapped_ssh_if_needed
from hubmgr import remote_ssh as rs
from hubmgr import usb_probe as usb
def main() -> int:
argv = sys.argv[1:]
if len(argv) >= 2 and argv[0] in ("--ssh", "--remote"):
remote_host = argv[1]
rest = argv[2:]
if not rest:
print(
"Usage: hub_manager.py --ssh user@host <command> [args...]\n"
" Example: hub_manager.py --ssh pi@192.168.1.39 discover\n"
" If brainstem is in a Pi venv: copy remote_ssh.env.example → remote_ssh.env next to\n"
" this script on the PC where you run --ssh (paths in the file are on the Pi).\n"
" Or export HUB_MANAGER_REMOTE_PYTHON / HUB_MANAGER_REMOTE_SCRIPT.\n"
" On the Pi: pip install -r requirements.txt in that venv; udev 24ff.",
file=sys.stderr,
flush=True,
)
return 2
return rs.ssh_forward(remote_host, rest)
rc_ssh_map = dispatch_fiber_mapped_ssh_if_needed(argv)
if rc_ssh_map is not None:
return rc_ssh_map
os.write(2, b"hub_manager: start\n")
try:
load_brainstem()
except Exception as exc:
print(f"hub_manager: failed to import brainstem: {exc}", file=sys.stderr, flush=True)
if isinstance(exc, ImportError):
print(
" If this text came from `hub_manager.py --ssh …`: the remote used system python3 by default.\n"
" On your PC export HUB_MANAGER_REMOTE_PYTHON to the Pi venv's python and\n"
" HUB_MANAGER_REMOTE_SCRIPT to that hub_manager.py (absolute paths on the Pi).",
file=sys.stderr,
flush=True,
)
return 1
mgr = AcronameManager()
try:
cmd = sys.argv[1].lower() if len(sys.argv) > 1 else "status"
target = sys.argv[2] if len(sys.argv) > 2 else "all"
if cmd == "status":
mgr.status(target)
elif cmd == "calibrate-ports-json":
if not mgr.hubs and not mgr.connect():
print("[]", flush=True)
else:
pairs = mgr._ordered_downstream_ports()
print(json.dumps([[h, p] for h, p in pairs]), flush=True)
elif cmd == "lsusb-lines-json":
print(json.dumps(usb.lsusb_lines()), flush=True)
elif cmd == "discover":
mgr.discover()
elif cmd == "power":
if len(sys.argv) < 5 or sys.argv[2].lower() != "fiber-port":
print(
"Usage: hub_manager.py power fiber-port <fiber_port_id> on|off\n"
" Uses fiber_map.json; per-entry ssh / host+user forwards to that host (see help).",
file=sys.stderr,
flush=True,
)
return 2
try:
fp_n = int(sys.argv[3])
except ValueError:
print("fiber_port_id must be an integer.", file=sys.stderr, flush=True)
return 2
mode = sys.argv[4].lower()
if mode not in ("on", "off"):
print("Last argument must be on or off.", file=sys.stderr, flush=True)
return 2
mgr.fiber_power(mode, fp_n)
elif cmd == "fiber":
if len(sys.argv) < 3:
print(
"Usage: hub_manager.py fiber status\n"
" hub_manager.py fiber chip <fiber_port_id> [save]\n"
" status — hub.port, Route, power, and saved chip preview from fiber_map.json\n"
" chip — lsusb diff on that USB port; add save to store usb_id / chip_type in the map",
file=sys.stderr,
flush=True,
)
return 2
sub = sys.argv[2].lower()
if sub == "status":
mgr.fiber_map_status()
elif sub == "chip":
if len(sys.argv) < 4:
print(
"Usage: hub_manager.py fiber chip <fiber_port_id> [save]",
file=sys.stderr,
flush=True,
)
return 2
try:
chip_fp = int(sys.argv[3])
except ValueError:
print("fiber_port_id must be an integer.", file=sys.stderr, flush=True)
return 2
save_chip = len(sys.argv) >= 5 and sys.argv[4].lower() == "save"
mgr.fiber_chip(chip_fp, save=save_chip)
else:
print(f"Unknown fiber subcommand: {sub!r}", file=sys.stderr, flush=True)
return 2
elif cmd == "panel":
if len(sys.argv) < 3:
print(
"Usage: hub_manager.py panel status\n"
" hub_manager.py panel on|off <panel_port>\n"
" hub_manager.py panel reboot|reboot-force <panel_port>\n"
" hub_manager.py panel calibrate [merge] [<N>] [--ssh user@host] …\n"
" calibrate: local hub ports first, then each --ssh host, calibrate_remotes in JSON, and/or\n"
" HUB_MANAGER_CALIBRATE_REMOTES in remote_ssh.env (comma-separated) for one-command hybrid.\n"
" merge / N as before; remote steps set \"ssh\" on new fiber_ports entries.\n"
" <panel_port> is 1-24; use power fiber-port for arbitrary ids.\n"
" Preset: fiber_map.rpi20.json → fiber_map.json for 8+8+4 → fiber ports 1-20.",
file=sys.stderr,
flush=True,
)
return 2
sub = sys.argv[2].lower()
if sub == "status":
mgr.panel_status()
elif sub == "calibrate":
args = sys.argv[3:]
merge, limit, cal_hosts = rs.parse_panel_calibrate_argv(args)
mgr.panel_calibrate(merge=merge, limit=limit, calibrate_ssh_hosts=cal_hosts)
elif sub in ("on", "off"):
if len(sys.argv) < 4:
print(f"Usage: hub_manager.py panel {sub} <1-24>", file=sys.stderr, flush=True)
return 2
mgr.panel_power(sub, int(sys.argv[3]))
elif sub == "reboot":
if len(sys.argv) < 4:
print("Usage: hub_manager.py panel reboot <1-24>", file=sys.stderr, flush=True)
return 2
mgr.panel_reboot(int(sys.argv[3]), skip_empty=True)
elif sub == "reboot-force":
if len(sys.argv) < 4:
print("Usage: hub_manager.py panel reboot-force <1-24>", file=sys.stderr, flush=True)
return 2
mgr.panel_reboot(int(sys.argv[3]), skip_empty=False)
else:
print(f"Unknown panel subcommand: {sub!r}", file=sys.stderr, flush=True)
return 2
elif cmd in ("on", "off"):
if not mgr.power(cmd, target):
return 1
elif cmd in ("reboot", "reboot-force"):
mgr.reboot(target, skip_empty=(cmd == "reboot"))
elif cmd == "setup":
mgr.setup_udev()
elif cmd == "verify":
mgr.verify()
elif cmd in ("help", "-h", "--help"):
print(
"Usage: hub_manager.py <command> [target]\n"
" discover — list hubs (serial, port count); no port I/O\n"
" status [target] — default command; target like all, 1.3, all.2\n"
" fiber status — fiber_ports + power (local or per-entry ssh / host+user)\n"
" fiber chip <id> [save] — lsusb probe; save stores usb_id / chip_type in fiber_map.json\n"
" power fiber-port <id> on|off — power by fiber key (ssh forward if map says so)\n"
" panel status — rack positions 1-24 (fiber ids 1-24 in fiber_map.json)\n"
" panel calibrate [merge] [N] [--ssh user@host]… — hybrid local + ssh hubs → one fiber_map.json\n"
" panel on|off|reboot|reboot-force <n>\n"
" on|off [target] reboot|reboot-force [target] setup verify\n"
"\n"
"Remote (hubs on another host — no local brainstem needed):\n"
" hub_manager.py --ssh user@host discover\n"
" remote_ssh.env next to hub_manager.py (see remote_ssh.env.example) or env vars:\n"
" HUB_MANAGER_REMOTE_PYTHON remote interpreter (default python3)\n"
" HUB_MANAGER_REMOTE_SCRIPT remote script path (default /usr/local/bin/hub_manager.py)\n"
" HUB_MANAGER_SSH_OPTS e.g. '-o BatchMode=yes'\n"
" HUB_MANAGER_CALIBRATE_REMOTES optional comma-separated user@host for panel calibrate (no --ssh needed)\n"
" Pi: pip install -r requirements.txt in the venv you point REMOTE_PYTHON at; udev 24ff.\n"
"\n"
"fiber_map.json fiber_ports entries may set ssh routing (hubs on another machine):\n"
' "ssh": "user@host" or "remote": "" or "host": "ip", "user": "pi"\n'
" On the SSH destination, the same fiber id should be local (omit ssh) so commands are not re-forwarded.\n"
' Optional "pcie": { bus, switch, slot, adapter_port, sfp_serial, board_serial, … } — calibrate can fill via 16+SFP.\n'
"\n"
"Hybrid calibrate: put {\"calibrate_remotes\": [\"pi@ip\"]} in fiber_map.json or pass --ssh per host;\n"
" order is all local downstream ports, then each remotes ports (see calibrate-ports-json on the Pi)."
)
else:
print(f"Unknown command: {cmd!r}", file=sys.stderr, flush=True)
print(
"Try: --ssh user@host … | discover | calibrate-ports-json | status | fiber | power | panel | … | help",
file=sys.stderr,
flush=True,
)
return 2
finally:
mgr.disconnect()
return 0

View File

@ -1,26 +0,0 @@
"""Runtime directory for JSON maps and remote_ssh.env (the hub_manager.py install location)."""
import os
from typing import Optional
_BASE: Optional[str] = None
def configure(app_dir: str) -> None:
"""Call once from hub_manager.py with dirname(abspath(__file__))."""
global _BASE
_BASE = os.path.abspath(app_dir)
def base_dir() -> str:
if _BASE is None:
raise RuntimeError("hubmgr.paths.configure() was not called (run via hub_manager.py)")
return _BASE
def panel_map_path() -> str:
return os.path.join(base_dir(), "panel_map.json")
def fiber_map_path() -> str:
return os.path.join(base_dir(), "fiber_map.json")

View File

@ -1,313 +0,0 @@
"""SSH forwarding to remote hub_manager; remote lsusb/status helpers for calibrate."""
import json
import os
import shlex
import subprocess
import sys
from hubmgr.paths import base_dir
REMOTE_SSH_ENV_KEYS = frozenset(
{
"HUB_MANAGER_REMOTE_PYTHON",
"HUB_MANAGER_REMOTE_SCRIPT",
"HUB_MANAGER_SSH_BIN",
"HUB_MANAGER_SSH_OPTS",
# Comma-separated user@host; panel calibrate adds these without repeating --ssh on CLI.
"HUB_MANAGER_CALIBRATE_REMOTES",
}
)
def apply_remote_ssh_env_file():
"""
Load remote_ssh.env (or .hub_manager_remote) from the hub_manager install directory.
Uses os.environ.setdefault so real environment variables still win.
"""
b = base_dir()
for fname in ("remote_ssh.env", ".hub_manager_remote"):
path = os.path.join(b, fname)
if not os.path.isfile(path):
continue
try:
with open(path, encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line or line.startswith("#"):
continue
if "=" not in line:
continue
key, _, val = line.partition("=")
key, val = key.strip(), val.strip()
if len(val) >= 2 and val[0] == val[-1] and val[0] in "'\"":
val = val[1:-1]
if key in REMOTE_SSH_ENV_KEYS:
os.environ.setdefault(key, val)
except OSError:
continue
break
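Two details of this loader are worth keeping in mind: one layer of matching quotes is stripped from values, and `os.environ.setdefault` means a variable already set in the real environment always wins over the file. A self-contained sketch of the same parsing (the `DEMO_*` keys are made up for the demo):

```python
import os

def load_env_file(text: str, allowed: frozenset) -> None:
    # Same shape as apply_remote_ssh_env_file: skip comments/blanks, strip one
    # layer of matching quotes, and use setdefault so real env vars still win.
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, val = line.partition("=")
        key, val = key.strip(), val.strip()
        if len(val) >= 2 and val[0] == val[-1] and val[0] in "'\"":
            val = val[1:-1]
        if key in allowed:
            os.environ.setdefault(key, val)

os.environ["DEMO_A"] = "from-real-env"
load_env_file(
    'DEMO_A="from-file"\n# comment\nDEMO_B=plain\n',
    frozenset({"DEMO_A", "DEMO_B"}),
)
print(os.environ["DEMO_A"], os.environ["DEMO_B"])
```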
def ssh_forward(remote_host, remote_args):
"""
Run hub_manager on a remote machine (e.g. Raspberry Pi with USB hubs attached).
Does not import brainstem locally; only needs the OpenSSH client.
"""
apply_remote_ssh_env_file()
py = os.environ.get("HUB_MANAGER_REMOTE_PYTHON", "python3")
script = os.environ.get("HUB_MANAGER_REMOTE_SCRIPT", "/usr/local/bin/hub_manager.py")
ssh_bin = os.environ.get("HUB_MANAGER_SSH_BIN", "ssh")
extra = shlex.split(os.environ.get("HUB_MANAGER_SSH_OPTS", ""))
if (
len(remote_args) >= 2
and remote_args[0].lower() == "panel"
and remote_args[1].lower() == "calibrate"
and not any(x in ("-t", "-tt") for x in extra)
):
extra = ["-t", *extra]
cmd = [ssh_bin, *extra, remote_host, py, script, *remote_args]
print(f"hub_manager: ssh {remote_host} → {py} {script} {' '.join(remote_args)}", file=sys.stderr, flush=True)
proc = subprocess.run(cmd, stdin=sys.stdin)
return proc.returncode if proc.returncode is not None else 1
def ssh_forward_capture(remote_host, remote_args, timeout=90):
"""Run hub_manager on remote; return (exit_code, stdout, stderr). No TTY."""
apply_remote_ssh_env_file()
py = os.environ.get("HUB_MANAGER_REMOTE_PYTHON", "python3")
script = os.environ.get("HUB_MANAGER_REMOTE_SCRIPT", "/usr/local/bin/hub_manager.py")
ssh_bin = os.environ.get("HUB_MANAGER_SSH_BIN", "ssh")
extra = shlex.split(os.environ.get("HUB_MANAGER_SSH_OPTS", ""))
cmd = [ssh_bin, *extra, remote_host, py, script, *remote_args]
try:
proc = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=timeout,
stdin=subprocess.DEVNULL,
)
except subprocess.TimeoutExpired:
return 124, "", "ssh/hub_manager timed out"
return (
proc.returncode if proc.returncode is not None else 1,
proc.stdout or "",
proc.stderr or "",
)
def parse_panel_calibrate_argv(args):
"""
panel calibrate [merge] [N] [--ssh user@host] ...
Returns (merge, limit, calibrate_ssh_hosts).
"""
merge = False
limit = None
hosts = []
i = 0
while i < len(args):
a = args[i]
low = a.lower()
if low == "merge":
merge = True
i += 1
continue
if low == "--ssh":
if i + 1 >= len(args):
print("panel calibrate: --ssh requires user@host", file=sys.stderr, flush=True)
sys.exit(2)
hosts.append(args[i + 1].strip())
i += 2
continue
if a.isdigit():
limit = int(a)
i += 1
continue
print(f"panel calibrate: unknown argument {a!r}", file=sys.stderr, flush=True)
sys.exit(2)
return merge, limit, hosts
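The accepted grammar is order-insensitive: `merge`, a bare integer limit, and repeated `--ssh user@host` pairs may appear in any order. A side-effect-free sketch of the same grammar (raising `ValueError` instead of calling `sys.exit`, for the demo):

```python
def parse_calibrate_args(args):
    # Accepts: merge | <digits> | --ssh user@host ... in any order.
    merge, limit, hosts = False, None, []
    i = 0
    while i < len(args):
        a = args[i]
        if a.lower() == "merge":
            merge = True
            i += 1
        elif a.lower() == "--ssh":
            if i + 1 >= len(args):
                raise ValueError("--ssh requires user@host")
            hosts.append(args[i + 1].strip())
            i += 2
        elif a.isdigit():
            limit = int(a)
            i += 1
        else:
            raise ValueError(f"unknown argument {a!r}")
    return merge, limit, hosts

print(parse_calibrate_args(["merge", "--ssh", "pi@10.0.0.5", "8"]))
```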
def parse_discover_stdout_for_calibrate_ports(stdout):
"""
Parse `discover` table lines: '1 | 0x........ | 8' → (1,0)…(1,7).
"""
pairs = []
for line in stdout.splitlines():
raw = line.rstrip()
if "|" not in raw:
continue
if set(raw.strip()) <= {"-", " "}:
continue
parts = [p.strip() for p in raw.split("|")]
if len(parts) < 3:
continue
hub_s, _, n_s = parts[0], parts[1], parts[2]
if not hub_s.isdigit():
continue
try:
hub = int(hub_s)
nports = int(n_s)
except ValueError:
continue
if hub < 1 or nports < 1:
continue
for p in range(nports):
pairs.append((hub, p))
return pairs
def maybe_hint_remote_ssh_python(exit_code, out, err):
"""Exit 126/127 usually means bad HUB_MANAGER_REMOTE_PYTHON path on the *remote* host."""
if exit_code not in (126, 127):
return
blob = f"{out or ''} {err or ''}".lower()
if "no such file" not in blob and "not found" not in blob:
return
py = os.environ.get("HUB_MANAGER_REMOTE_PYTHON", "python3")
script = os.environ.get("HUB_MANAGER_REMOTE_SCRIPT", "/usr/local/bin/hub_manager.py")
print(
" SSH ran this on the Pi (paths must exist *on the Pi*, not on Fedora):\n"
f" interpreter: {py}\n"
f" script: {script}\n"
" Fix: SSH to the Pi, activate the venv with brainstem, run:\n"
" which python3\n"
" realpath /path/to/hub_manager.py\n"
" Put those absolute paths in remote_ssh.env next to hub_manager.py on Fedora, or export:\n"
" HUB_MANAGER_REMOTE_PYTHON=… HUB_MANAGER_REMOTE_SCRIPT=…",
file=sys.stderr,
flush=True,
)
def fetch_calibrate_ports_json(ssh_host):
"""Ask remote hub_manager for [[hub,port], ...]; fall back to parsing `discover` if remote script is older."""
code, out, err = ssh_forward_capture(ssh_host, ["calibrate-ports-json"], timeout=90)
out = out or ""
err = err or ""
json_pairs = None
if code == 0 and out.strip():
try:
data = json.loads(out.strip())
except json.JSONDecodeError:
json_pairs = None
else:
if isinstance(data, list):
json_pairs = []
for item in data:
if isinstance(item, (list, tuple)) and len(item) == 2:
try:
json_pairs.append((int(item[0]), int(item[1])))
except (TypeError, ValueError):
continue
else:
json_pairs = None
if json_pairs is not None:
return json_pairs
if "Unknown command" in (out + err):
print(
f"Remote {ssh_host!r} has no calibrate-ports-json; using discover fallback "
"(scp this hub_manager.py to the Pi to skip the extra SSH round trip).",
file=sys.stderr,
flush=True,
)
code2, out2, err2 = ssh_forward_capture(ssh_host, ["discover"], timeout=120)
out2 = out2 or ""
if code2 != 0:
print(
f"discover on {ssh_host!r} failed (exit {code2}): {(err2 or out2).strip()[:400]}",
file=sys.stderr,
flush=True,
)
maybe_hint_remote_ssh_python(code2, out2, err2)
if code != 0 or (out + err).strip():
print(
f"calibrate-ports-json on {ssh_host!r} (exit {code}): {(err or out).strip()[:400]}",
file=sys.stderr,
flush=True,
)
maybe_hint_remote_ssh_python(code, out, err)
return []
pairs = parse_discover_stdout_for_calibrate_ports(out2)
if not pairs:
print(
f"Could not parse hub/port list from discover on {ssh_host!r}. "
"Ensure the Pi connects to its hubs (udev 24ff) and discover prints the Hub|Serial|Ports table.",
file=sys.stderr,
flush=True,
)
return []
print(
f"Using discover output for {ssh_host!r} ({len(pairs)} hub.port step(s)).",
file=sys.stderr,
flush=True,
)
return pairs
def remote_hub_port_power(ssh_host, hub_1, port_0, enable):
sub = "on" if enable else "off"
code, out, err = ssh_forward_capture(ssh_host, [sub, f"{hub_1}.{port_0}"])
blob = "\n".join(x.strip() for x in (out or "", err or "") if x and x.strip()).strip()
return code, blob
def remote_port_power_feedback(ssh_host, hub_1, port_0):
code, out, err = ssh_forward_capture(ssh_host, ["status", f"{hub_1}.{port_0}"])
if code != 0:
return f"remote status failed ({code}): {(err or out).strip()[:120]}"
pwr, cur = parse_status_line_for_hub_port(out, hub_1, port_0)
return f"remote hub reports {pwr}, {cur} mA"
def remote_lsusb_lines(ssh_host):
"""Full lsusb output lines on the SSH host (hub_manager lsusb-lines-json or plain `ssh … lsusb`)."""
code, out, err = ssh_forward_capture(ssh_host, ["lsusb-lines-json"], timeout=45)
if code == 0 and out.strip():
try:
data = json.loads(out.strip())
if isinstance(data, list) and all(isinstance(x, str) for x in data):
return data
except json.JSONDecodeError:
pass
apply_remote_ssh_env_file()
ssh_bin = os.environ.get("HUB_MANAGER_SSH_BIN", "ssh")
extra = shlex.split(os.environ.get("HUB_MANAGER_SSH_OPTS", ""))
try:
proc = subprocess.run(
[ssh_bin, *extra, ssh_host, "lsusb"],
capture_output=True,
text=True,
timeout=45,
)
except (OSError, subprocess.TimeoutExpired):
return []
if proc.returncode != 0 or not proc.stdout:
return []
return proc.stdout.splitlines()
def parse_status_line_for_hub_port(stdout, hub_1, port_0):
"""Parse `status hub.port` table output for one port; return (power, current_str) or (?, ?)."""
for line in stdout.splitlines():
ln = line.strip()
if not ln or ln.startswith("-") or "Identity" in ln or "Hub" not in ln:
continue
parts = [p.strip() for p in line.split("|")]
if len(parts) < 4:
continue
hub_part = parts[0].replace("Hub", "", 1).strip()
try:
h = int(hub_part)
p = int(parts[1])
except ValueError:
continue
if h == hub_1 and p == port_0:
return parts[2], parts[3]
return "?", "?"

View File

@ -1,26 +0,0 @@
[
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]

View File

@ -1,86 +0,0 @@
[
{
"hub": 1,
"port": 0
},
{
"hub": 1,
"port": 1
},
{
"hub": 1,
"port": 2
},
{
"hub": 1,
"port": 3
},
{
"hub": 1,
"port": 4
},
{
"hub": 1,
"port": 5
},
{
"hub": 1,
"port": 6
},
{
"hub": 1,
"port": 7
},
{
"hub": 2,
"port": 0
},
{
"hub": 2,
"port": 1
},
{
"hub": 2,
"port": 2
},
{
"hub": 2,
"port": 3
},
{
"hub": 2,
"port": 4
},
{
"hub": 2,
"port": 5
},
{
"hub": 2,
"port": 6
},
{
"hub": 2,
"port": 7
},
{
"hub": 3,
"port": 0
},
{
"hub": 3,
"port": 1
},
{
"hub": 3,
"port": 2
},
{
"hub": 3,
"port": 3
},
null,
null,
null,
null
]

View File

@ -1,21 +1,20 @@
# Copy to remote_ssh.env (same directory as fiwi.py on the machine where you run SSH from).
#
# Paths below must exist on the *remote* host (e.g. Pi), not on your workstation.
#
# On the Pi, in the venv with brainstem:
#   which python3
#   realpath /path/to/fiwi.py
# Paste those outputs below. Do not use placeholder paths like "your-venv".
#
# Environment variables override these lines if both are set.
FIWI_REMOTE_PYTHON=/home/pi/venv/bin/python3
FIWI_REMOTE_SCRIPT=/home/pi/FiWiManager/fiwi.py
# Optional: comma-separated hosts for hybrid panel calibrate (with local hubs in one run):
# FIWI_CALIBRATE_REMOTES=pi@192.168.1.50,pi@192.168.1.51
#
# Optional SSH client tweaks:
# FIWI_SSH_BIN=ssh
# FIWI_SSH_OPTS=-o BatchMode=yes
#
# Optional: deferred remote calls (spawn ssh children; panel calibrate overlaps SSH via Popen):
# FIWI_REMOTE_DEFER=1