kstonekuan 6 hours ago

I was curious how people choose between voice AI agent providers and wanted to compare them on baseline network performance, but I couldn't find any existing tool that benchmarks transport performance before STT/LLM/TTS processing. So I started building a benchmarking tool to compare Pipecat (Daily) vs LiveKit.

Unlike your average LLM benchmark, this one treats location and time as the main variables, since those are the biggest factors for networked systems (I built networking tools in a past life). The idea is to run benchmarks from multiple geographic locations over time to see how each platform performs under different conditions.

Basic setup: echo agent servers create and join temporary rooms and echo back any message they receive. Since the Pipecat (Daily) and LiveKit Python SDKs can't coexist in the same process, each platform's agent runs as a separate process on its own port. Benchmark runner clients send pings over WebRTC data channels and measure the RTT of each message (sketched below). Raw measurements get stored in InfluxDB, then the dashboard calculates aggregate stats (P50/P95/P99, jitter, packet loss) and visualizes everything with filters and side-by-side comparisons.
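
For a rough idea of what the runner does, here's a minimal sketch in Python. The channel object is a hypothetical stand-in for each platform's data channel API (the real code adapts to the Daily and LiveKit SDKs separately), and "jitter" here is just the standard deviation of RTT:

    import asyncio
    import json
    import statistics
    import time

    async def run_ping_benchmark(channel, count=100, interval=0.2, timeout=2.0):
        # `channel` is a hypothetical async wrapper exposing send()/recv();
        # the real runner adapts this per SDK. The echo agent returns the
        # payload unchanged, so RTT = receive time - send time.
        rtts = []
        for seq in range(count):
            sent_at = time.perf_counter()
            await channel.send(json.dumps({"seq": seq}).encode())
            try:
                await asyncio.wait_for(channel.recv(), timeout=timeout)
            except asyncio.TimeoutError:
                continue  # no echo within the timeout -> counts as packet loss
            rtts.append((time.perf_counter() - sent_at) * 1000.0)  # RTT in ms
            await asyncio.sleep(interval)
        return rtts

    def summarize(rtts, sent):
        # Aggregate stats like the dashboard shows. quantiles(n=100) returns
        # 99 cut points, so indexes 49/94/98 correspond to P50/P95/P99.
        qs = statistics.quantiles(rtts, n=100)
        return {
            "p50_ms": qs[49],
            "p95_ms": qs[94],
            "p99_ms": qs[98],
            "jitter_ms": statistics.stdev(rtts),  # stddev-of-RTT jitter
            "packet_loss": 1.0 - len(rtts) / sent,
        }

In practice you'd also match echoes to pings by seq to handle reordering and late arrivals, but that's the general shape of it.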

I struggled to make the comparison fair since each platform exposes different APIs. I ended up using data channels (not audio) for consistency, though that means I'm only measuring data message transport, not the full audio pipeline (codecs, jitter buffers, etc.).

Latency is hard to measure precisely, so I'm estimating it from the server's processing time, which is admittedly imperfect. It only tests data channels, not the full audio path. And it's just Pipecat (Daily) and LiveKit for now; I'd like to add Agora and others.
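
A naive version of that estimate, assuming it's derived from the measured RTT minus the echo agent's self-reported processing time, and assuming symmetric forward/return paths (which WebRTC routing doesn't guarantee, and which is part of the inaccuracy):

    def estimate_one_way_ms(rtt_ms, server_processing_ms):
        # Subtract the echo agent's self-reported processing time from the
        # measured RTT, then halve, assuming symmetric network paths.
        return max((rtt_ms - server_processing_ms) / 2.0, 0.0)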

The README screenshot shows synthetic data resembling early results. I'm not posting raw results yet since I'm still working out some measurement inaccuracies and need more data points across locations over time to draw solid conclusions.

This is functional but rough around the edges. Happy to keep building it out if people find it useful. Any ideas on better methodology for fair comparisons or more accurate measurements? What platforms would you want to see added?

Stack: Python, TypeScript (React), InfluxDB